Search results for: indicator estimation
227 The Effects of Branding on Profitability of Banks in Ghana
Authors: Evans Oteng, Clement Yeboah, Alexander Otechere-Fianko
Abstract:
In today’s economy, despite achievements and advances in banking and financial institutions, there are challenges that will require intensive efforts on the part of banks in Ghana. The perceived decline in the profitability of banks seems to have emanated from ineffective branding. Hence, the purpose of this quantitative descriptive-correlational study was to examine the effects of branding on the profitability of banks in Ghana. The researchers purposively sampled 116 banks in Ghana. Self-developed Likert-scale questionnaires were administered to the finance officers of the financial institutions. The results were statistically significant, F(1, 114) = 4.50, p = .036. This indicates that those banks in Ghana with good branding practices have strong marketing tools to identify and sell their products and services and, as such, hold a large market share. The correlation coefficient indicates that branding has a positive, statistically significant correlation with profitability (r = .207, p < 0.05), which signifies that as branding increases, the return-on-equity profitability indicator improves, and vice versa. Future researchers can consider other factors beyond branding, such as online banking. The study has significant implications for the success and competitive advantage of banks: effective branding allows them to differentiate themselves from their competitors. A strong and unique brand identity can help a bank stand out in a crowded market, attract customers, and build customer loyalty. This can lead to increased market share and profitability. Branding influences customer perception and trust. A well-established and reputable brand can create a positive image in the minds of customers, enhancing their confidence in the bank's products and services. This can result in increased customer acquisition, customer retention and a positive impact on profitability.
Banks with strong brands can leverage their reputation and customer trust to cross-sell additional products and services. When customers have confidence in the brand, they are more likely to explore and purchase other offerings from the same institution. Cross-selling can boost revenue streams and profitability. Successful branding can open up opportunities for brand extensions and diversification into new products or markets. Banks can leverage their trusted brand to introduce new financial products or expand their presence into related areas, such as insurance or investment services. This can lead to additional revenue streams and improved profitability. This study also has implications for education: increased profitability of banks due to effective branding can result in greater financial resources being available for corporate social responsibility (CSR) activities. Banks may invest in educational initiatives, such as scholarships, grants, research projects, and sponsorships, to support the education sector in Ghana. The study likewise has implications for logistics and supply chain management: strong branding can create trust and credibility among customers, leading to increased customer loyalty. This loyalty can positively impact the bank's relationships with its suppliers and logistics partners. It can result in better negotiation power, improved supplier relationships, and enhanced supply chain coordination, ultimately leading to more efficient and cost-effective logistics operations.
Keywords: branding, profitability, competitors, customer loyalty, customer retention, corporate social responsibility, cost-effective, logistics operations
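The statistics reported in this abstract follow the standard identity between the Pearson correlation of a simple linear regression and its F statistic, F = r²·df / (1 − r²) with df = n − 2. A minimal sketch of that relationship on synthetic data (the data, effect size, and seed below are illustrative assumptions, not the study's figures):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 116                      # sample size, as in the abstract
branding = rng.normal(size=n)
# Synthetic profitability with a weak positive link to branding (illustrative only)
profitability = 0.2 * branding + rng.normal(size=n)

r = np.corrcoef(branding, profitability)[0, 1]   # Pearson correlation
df = n - 2                                       # matches F(1, 114) in the abstract
F = r**2 * df / (1 - r**2)                       # F statistic of the simple regression
print(f"r = {r:.3f}, F(1, {df}) = {F:.2f}")
```

With real survey data the same two lines (correlation, then the F identity) reproduce the kind of test reported above.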
Procedia PDF Downloads 77
226 Rethinking Urban Voids: An Investigation beneath the Kathipara Flyover, Chennai into a Transit Hub by Adaptive Utilization of Space
Authors: V. Jayanthi
Abstract:
Urbanization and the pace of urbanization have increased tremendously in the last few decades. More towns are now being converted into cities. The urbanization trend is seen all over the world but is becoming most dominant in Asia. Today, the scale of urbanization in India is so huge that Indian cities are among the fastest-growing in the world, including Bangalore, Hyderabad, Pune, Chennai, Delhi, and Mumbai. Urbanization remains the single predominant factor continuously linked to the destruction of urban green spaces. With reference to Chennai as a case study, a city suffering from rapid deterioration of its green spaces, this paper sought to fill this gap by exploring key factors, besides urbanization, that are responsible for the destruction of green spaces. The paper relied on triangulated data collection techniques such as interviews, focus group discussions, personal observation and retrieval of archival data. It was observed that, apart from urbanization, problems with the ownership of green-space lands, low priority given to green spaces, poor maintenance, weak enforcement of development controls, wastage of underpass spaces, and uncooperative attitudes of the general public play a critical role in the destruction of urban green spaces. The paper therefore narrows down to the point that, for a city to have proper, sustainable urban green space, broader city development plans are essential. Though rapid urbanization is an indicator of positive development, it is also accompanied by a host of challenges. Chennai lost a lot of greenery as the city urbanized rapidly, which led to a steep fall in vegetation cover. Environmental deterioration will be the big price we pay if Chennai continues to grow at the expense of greenery.
Soaring skyscrapers, multistoried complexes, gated communities, and villas frame the iconic skyline of today’s Chennai, which reveals that we overlook the importance of our green cover, needed to balance our urban and lung spaces. Chennai, with a clumped landscape at the center of the city, is predicted to convert 36% of its total area into urban areas by 2026. One major issue is that a city designed and planned in isolation creates neglected, underused spaces all around it. These urban voids are dead, underused, unused spaces in cities, formed due to inefficient decision making, poor land management, and poor coordination. Urban voids hold huge potential for creating a stronger urban fabric: they can be exploited as public gathering spaces, pocket parks or plazas, or simply enhance the public realm, rather than serving as sites for the dumping of debris and encroachments. Flyovers need to justify their existence by being more than just traffic and transport solutions. The vast, unused space below the Kathipara flyover is a case in point. This flyover connects three major routes: Tambaram, Koyambedu, and Adyar. This research will focus on the concept of urban voids: how the voids under flyovers can be used in the placemaking process, and how this neglected space beneath flyovers can become part of the urban realm through urban design and landscaping.
Keywords: landscape design, flyovers, public spaces, reclaiming lost spaces, urban voids
Procedia PDF Downloads 282
225 Development of Market Penetration for High Energy Efficiency Technologies in Alberta’s Residential Sector
Authors: Saeidreza Radpour, Md. Alam Mondal, Amit Kumar
Abstract:
Market penetration of high-energy-efficiency technologies has key impacts on energy consumption and GHG mitigation. It is also useful for managing the policies formulated by public or private organizations to achieve energy or environmental targets. Energy intensity in the residential sector of Alberta was 148.8 GJ per household in 2012, which is 39% more than the Canadian average of 106.6 GJ and the highest per-household energy consumption among the provinces. Energy intensity from appliances in Alberta was 15.3 GJ per household in 2012, which is 14% higher than the average of the other provinces and territories in Canada. In this research, a framework has been developed to analyze the market penetration and market share of high-energy-efficiency technologies in the residential sector. The overall methodology was based on data-intensive models estimating the market penetration of appliances in the residential sector over a time period. The developed models were a function of a number of macroeconomic and technical parameters. The mathematical equations were developed based on twenty-two years of historical data (1990-2011). The models were analyzed through a series of statistical tests. The market shares of high-efficiency appliances were estimated based on related variables such as capital and operating costs, discount rate, appliance lifetime, annual interest rate, incentives and maximum achievable efficiency over the period 2015 to 2050. Results show that the market penetration of refrigerators is higher than that of other appliances. The stock of refrigerators per household is anticipated to increase from 1.28 in 2012 to 1.314 and 1.328 in 2030 and 2050, respectively. Modelling results show that the market penetration rate of stand-alone freezers will decrease between 2012 and 2050.
Freezer stock per household will decline from 0.634 in 2012 to 0.556 and 0.515 in 2030 and 2050, respectively. The stock of dishwashers per household is expected to increase from 0.761 in 2012 to 0.865 and 0.960 in 2030 and 2050, respectively. The increase in the market penetration rate of clothes washers and clothes dryers is nearly parallel. The stock of clothes washers and clothes dryers per household is expected to rise from 0.893 and 0.979 in 2012 to 0.960 and 1.0 in 2050, respectively. This presentation will include a detailed discussion of the modelling methodology and results.
Keywords: appliances efficiency improvement, energy star, market penetration, residential sector
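The market-share estimation described in this abstract, driven by capital and operating costs, discount rate, and appliance lifetime, can be sketched with an annualized-cost comparison: a capital recovery factor converts the purchase price into an equivalent annual cost, and a logit rule splits the market between a standard and a high-efficiency appliance. The cost figures and the logit form below are illustrative assumptions, not the study's calibrated model:

```python
import math

def crf(rate, years):
    """Capital recovery factor: annualizes an upfront cost over an appliance lifetime."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def annual_cost(capital, operating, rate=0.05, life=15):
    """Equivalent annual cost: annualized capital plus yearly operating cost."""
    return capital * crf(rate, life) + operating

# Hypothetical standard vs. high-efficiency refrigerator (illustrative prices)
std = annual_cost(capital=900.0, operating=60.0)
eff = annual_cost(capital=1200.0, operating=35.0)

# Logit market-share rule: a lower annualized cost attracts a larger share
beta = 0.05  # illustrative cost-sensitivity parameter
share_eff = math.exp(-beta * eff) / (math.exp(-beta * std) + math.exp(-beta * eff))
print(f"high-efficiency share: {share_eff:.2%}")
```

In a full model, such shares would be re-evaluated each year from 2015 to 2050 as costs, incentives and efficiencies evolve, yielding the stock trajectories reported above.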
Procedia PDF Downloads 285
224 Time Estimation of Return to Sports Based on Classification of Health Levels of Anterior Cruciate Ligament Using a Convolutional Neural Network after Reconstruction Surgery
Authors: Zeinab Jafari A., Ali Sharifnezhad B., Mohammad Razi C., Mohammad Haghpanahi D., Arash Maghsoudi
Abstract:
Background and Objective: Sports-related rupture of the anterior cruciate ligament (ACL) and the injuries that follow have been associated with various disorders, such as long-lasting changes in muscle activation patterns in athletes, which might persist after ACL reconstruction (ACLR). The rupture of the ACL might result in abnormal patterns of movement execution, extending the treatment period and delaying athletes’ return to sports (RTS). As ACL injury is especially prevalent among athletes, the lengthy treatment process and athletes’ absence from sports are of great concern to athletes and coaches. Thus, estimating the safe time of RTS is of crucial importance. Therefore, using a deep neural network (DNN) to classify the health levels of the ACL in injured athletes, this study aimed to estimate the safe time for athletes to return to competition. Methods: Ten athletes with ACLR and fourteen healthy controls participated in this study. Three health levels of the ACL were defined: healthy, six months post-ACLR surgery and nine months post-ACLR surgery. Athletes with ACLR were tested six and nine months after the ACLR surgery. During the course of this study, surface electromyography (sEMG) signals were recorded from five knee muscles, namely the Rectus Femoris (RF), Vastus Lateralis (VL), Vastus Medialis (VM), Biceps Femoris (BF), and Semitendinosus (ST), during single-leg drop landing (SLDL) and forward hopping (SLFH) tasks. The Pseudo-Wigner-Ville distribution (PWVD) was used to produce three-dimensional (3-D) images of the energy distribution patterns of the sEMG signals. These 3-D images were then converted to two-dimensional (2-D) images using a heat-mapping technique and fed to a deep convolutional neural network (DCNN). Results: In this study, we estimated the safe time of RTS by designing a DCNN classifier with an accuracy of 90%, which could classify the ACL into the three health levels.
Discussion: The findings of this study demonstrate the potential of the DCNN classification technique using sEMG signals for estimating RTS time, which will assist in evaluating the recovery process after ACLR in athletes.
Keywords: anterior cruciate ligament reconstruction, return to sports, surface electromyography, deep convolutional neural network
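The time-frequency-to-image step described above can be sketched with a short-time Fourier energy map standing in for the PWVD (the PWVD itself requires an analytic-signal kernel), followed by normalization into a 2-D heat-map array of the kind a CNN would consume. The window length, hop size, and the synthetic signal are illustrative assumptions:

```python
import numpy as np

def stft_energy(signal, win=64, hop=16):
    """Short-time Fourier energy map: rows = frequency bins, cols = time frames."""
    window = np.hanning(win)
    frames = [signal[i:i + win] * window
              for i in range(0, len(signal) - win + 1, hop)]
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2   # per-frame power spectrum
    return spec.T                                      # shape: (freq, time)

def to_heatmap(energy):
    """Normalize the energy map to [0, 1] so it can be stored as a 2-D image."""
    e = energy - energy.min()
    return e / e.max()

rng = np.random.default_rng(1)
semg = rng.normal(size=2048)          # synthetic stand-in for an sEMG recording
img = to_heatmap(stft_energy(semg))
print(img.shape, img.min(), img.max())
```

Each muscle and task would yield one such image; stacks of these images form the training set for the classifier.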
Procedia PDF Downloads 78
223 Infestation in Omani Date Palm Orchards by Dubas Bug Is Related to Tree Density
Authors: Lalit Kumar, Rashid Al Shidi
Abstract:
Phoenix dactylifera (date palm) is a major crop in many Middle Eastern countries, including Oman. The Dubas bug Ommatissus lybicus is the main pest affecting date palm crops. However, not all plantations are infested, and it is still uncertain why some plantations become infested while others do not. This research investigated whether tree density and the system of planting (random versus systematic) had any relationship with infestation and infestation levels. Remote sensing and geographic information systems were used to determine the density of trees (number of trees per unit area), while infestation levels were determined by manually counting insects on 40 leaflets from two fronds on each tree, with a total of 20-60 trees in each village. The infestation was recorded as the average number of insects per leaflet. For tree density estimation, WorldView-3 scenes, with eight bands and 2 m spatial resolution, were used. The local maxima method, which depends on locating the pixel of highest brightness inside a certain exploration window, was used to identify and delineate individual trees in the image. This information was then used to determine whether the plantation was random or systematic. Ordinary least squares regression (OLS) was used to test the global correlation between tree density and infestation level, and geographically weighted regression (GWR) was used to find the local spatial relationship. The accuracy of tree detection varied from 83-99% in agricultural lands with systematic planting patterns to 50-70% in natural forest areas. Results revealed that the density of trees in most of the villages was higher than the recommended planting number (120-125 trees/hectare). For infestation correlations, the GWR model showed a good positive significant relationship between infestation and tree density in the spring season, with R² = 0.60, and a moderate positive significant relationship in the autumn season, with R² = 0.30.
In contrast, the OLS model showed a weaker positive significant relationship in the spring season, with R² = 0.02, p < 0.05, and an insignificant relationship in the autumn season, with R² = 0.01, p > 0.05. The results showed a positive correlation between infestation and tree density, which suggests that infestation severity increased as the density of date palm trees increased. The correlation results showed that density alone accounted for about 60% of the variation in infestation. This information can be used by the relevant authorities to better control infestations as well as to manage their pesticide spraying programs.
Keywords: dubas bug, date palm, tree density, infestation levels
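The local maxima method described above can be sketched as a sliding-window test: a pixel is declared a tree top if it is the unique brightest value inside its exploration window. The window size and the toy brightness grid below are illustrative, not the study's imagery:

```python
import numpy as np

def local_maxima(img, half=1):
    """Return (row, col) of pixels that are the unique maximum of their window."""
    peaks = []
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            r0, r1 = max(0, r - half), min(rows, r + half + 1)
            c0, c1 = max(0, c - half), min(cols, c + half + 1)
            win = img[r0:r1, c0:c1]
            # Unique-maximum test rejects flat regions with tied values
            if img[r, c] == win.max() and (win == win.max()).sum() == 1:
                peaks.append((r, c))
    return peaks

# Toy brightness grid with two bright 'crowns'
img = np.zeros((7, 7))
img[2, 2] = 5.0
img[5, 5] = 4.0
print(local_maxima(img))   # -> [(2, 2), (5, 5)]
```

Counting such peaks per unit area gives the tree density used in the OLS and GWR regressions; the window size must roughly match the crown size for the detection rates quoted above.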
Procedia PDF Downloads 193
222 Optimal Pricing Based on Real Estate Demand Data
Authors: Vanessa Kummer, Maik Meusel
Abstract:
Real estate demand estimates are typically derived from transaction data. However, in regions with excess demand, transactions are driven by supply and therefore do not indicate what people are actually looking for. To estimate the demand for housing in Switzerland, search subscriptions from all important Swiss real estate platforms are used. These data do, however, suffer from missing information—for example, many users do not specify how many rooms they would like or what price they would be willing to pay. In economic analyses, it is often the case that only complete data are used. Usually, however, the proportion of complete data is rather small, which leads to most information being neglected. In addition, the complete records may be strongly distorted, and the reason that data are missing might itself contain information, which is ignored by that approach. An interesting question is, therefore, whether for economic analyses such as the one at hand there is added value in using the whole data set with imputed missing values compared to using the usually small percentage of complete data (baseline), and how different algorithms affect that result. The imputation of the missing data is done using unsupervised learning. Out of the numerous unsupervised learning approaches, the most common ones, such as clustering, principal component analysis, and neural-network techniques, are applied. By training the model iteratively on the imputed data and thereby including the information of all data in the model, the distortion of the first training set—the complete data—vanishes. In a next step, the performance of the algorithms is measured. This is done by randomly creating missing values in subsets of the data, estimating those values with the relevant algorithms and several parameter combinations, and comparing the estimates to the actual data.
After having found the optimal parameter set for each algorithm, the missing values are imputed. Using the resulting data sets, the next step is to estimate the willingness to pay for real estate. This is done by fitting price distributions for real estate properties with certain characteristics, such as the region or the number of rooms. Based on these distributions, survival functions are computed to obtain the functional relationship between characteristics and selling probabilities. Comparing the survival functions shows that estimates based on the imputed data sets do not differ significantly from each other; however, the demand estimate derived from the baseline data does. This indicates that the baseline data set does not include all available information and is therefore not representative of the entire sample. Also, demand estimates derived from the whole data set are much more accurate than the baseline estimation. Thus, in order to obtain optimal results, it is important to make use of all available data, even though this involves additional procedures such as data imputation.
Keywords: demand estimate, missing-data imputation, real estate, unsupervised learning
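The survival-function step described above can be sketched directly: given stated budgets (willingness to pay) for one property segment, the empirical survival function S(p) is the share of searchers whose budget is at least p, i.e., the selling probability at asking price p. The segment and the synthetic lognormal budgets below are illustrative assumptions:

```python
import numpy as np

def survival(budgets, grid):
    """Empirical survival function: share of stated budgets at or above each grid price."""
    budgets = np.asarray(budgets)
    return np.array([(budgets >= p).mean() for p in grid])

# Synthetic willingness-to-pay values (CHF/month) for one segment, e.g. 3-room flats
rng = np.random.default_rng(2)
budgets = rng.lognormal(mean=7.6, sigma=0.25, size=500)

grid = np.array([1500.0, 2000.0, 2500.0, 3000.0])
S = survival(budgets, grid)
print(dict(zip(grid, S.round(2))))
```

Comparing such curves computed from the baseline data versus the imputed data sets is exactly the comparison the abstract describes.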
Procedia PDF Downloads 285
221 A Comparative Study of Sampling-Based Uncertainty Propagation with First Order Error Analysis and Percentile-Based Optimization
Authors: M. Gulam Kibria, Shourav Ahmed, Kais Zaman
Abstract:
In system analysis, the information on the uncertain input variables causes uncertainty in the system responses. Different probabilistic approaches for uncertainty representation and propagation in such cases exist in the literature, and different uncertainty representation approaches result in different outputs. Some approaches might result in a better estimation of the system response than others. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge (MUQC) has posed challenges about uncertainty quantification. Subproblem A of the challenge, the uncertainty characterization subproblem, is addressed in this study. In this subproblem, the challenge is to gather knowledge about unknown model inputs, which have inherent aleatory and epistemic uncertainties, from the responses (output) of the given computational model. We use two different methodologies to approach the problem. In the first methodology, we use sampling-based uncertainty propagation with first-order error analysis. In the other approach, we place emphasis on the use of percentile-based optimization (PBO). The NASA Langley MUQC subproblem A is developed in such a way that both aleatory and epistemic uncertainties need to be managed. The challenge problem classifies each uncertain parameter as belonging to one of the following three types: (i) an aleatory uncertainty modeled as a random variable with a fixed functional form and known coefficients; this uncertainty cannot be reduced; (ii) an epistemic uncertainty modeled as a fixed but poorly known physical quantity that lies within a given interval; this uncertainty is reducible; (iii) a parameter that might be aleatory but for which sufficient data might not be available to adequately model it as a single random variable; for example, the parameters of a normal variable, e.g., the mean and standard deviation, might not be precisely known but could be assumed to lie within some intervals.
This results in a distributional p-box: the physical parameter carries an aleatory uncertainty, but the parameters prescribing its mathematical model are subject to epistemic uncertainties. Each of the parameters of the random variable is an unknown element of a known interval, and this uncertainty is reducible. From the study, it is observed that, due to practical limitations or computational expense, the sampling in the sampling-based methodology is not exhaustive. For this reason, the sampling-based methodology has a high probability of underestimating the output bounds. Therefore, an optimization-based strategy to convert uncertainty described by interval data into a probabilistic framework is necessary. This is achieved in this study by using PBO.
Keywords: aleatory uncertainty, epistemic uncertainty, first order error analysis, uncertainty quantification, percentile-based optimization
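The distributional p-box of type (iii) is commonly propagated with a double-loop sampling scheme: the outer loop samples the epistemic parameters (mean, standard deviation) from their intervals, the inner loop propagates the resulting aleatory variable through the model, and the envelope over outer samples bounds the output percentile of interest. The intervals, sample sizes, and response function below are illustrative assumptions, not the challenge-problem model:

```python
import numpy as np

rng = np.random.default_rng(3)

def response(x):
    """Illustrative stand-in for the computational model."""
    return x ** 2 + 1.0

mu_lo, mu_hi = -0.5, 0.5     # epistemic interval for the mean
sd_lo, sd_hi = 0.8, 1.2      # epistemic interval for the standard deviation

p95 = []
for _ in range(200):                       # outer (epistemic) loop
    mu = rng.uniform(mu_lo, mu_hi)
    sd = rng.uniform(sd_lo, sd_hi)
    x = rng.normal(mu, sd, size=2000)      # inner (aleatory) loop
    p95.append(np.percentile(response(x), 95))

print(f"95th-percentile bounds: [{min(p95):.2f}, {max(p95):.2f}]")
```

The spread between min and max over the outer loop is the envelope that sampling can underestimate when, as the abstract notes, the outer loop is not exhaustive.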
Procedia PDF Downloads 240
220 Detailed Analysis of Multi-Mode Optical Fiber Infrastructures for Data Centers
Authors: Matej Komanec, Jan Bohata, Stanislav Zvanovec, Tomas Nemecek, Jan Broucek, Josef Beran
Abstract:
With the exponential growth of social networks, video streaming and increasing demands on data rates, the number of newly built data centers rises proportionately. The data centers, however, have to adjust to the rapidly increased amount of data that has to be processed. For this purpose, multi-mode (MM) fiber-based infrastructures are often employed. This stems from the fact that connections in data centers are typically realized over short distances, and the application of MM fibers and components considerably reduces costs. On the other hand, the usage of MM components brings specific requirements for installation and service conditions. Moreover, it has to be taken into account that MM fiber components have higher production tolerances for parameters like core and cladding diameters, eccentricity, etc. Due to the high demands on the reliability of data center components, the determination of a properly excited optical field inside the MM fiber core is among the key parameters when designing such an MM optical system architecture. An appropriately excited mode field of the MM fiber provides an optimal power budget in connections, leads to a decrease in insertion losses (IL) and achieves effective modal bandwidth (EMB). The main parameter in this case is the encircled flux (EF), which should be properly defined for variable optical sources and the consequent different mode-field distributions. In this paper, we present a detailed investigation and measurements of the mode-field distribution for short MM links intended in particular for data centers, with an emphasis on reliability and safety. These measurements are essential for large MM network design. Various scenarios, containing different fibers and connectors, were tested in terms of IL and mode-field distribution to reveal potential challenges.
Furthermore, we focused on particular defects and errors that can realistically occur, such as eccentricity, connector shifting or dust; these were simulated and measured, and their influence on EF statistics and on the functionality of the data center infrastructure was evaluated. The experimental tests were performed at two wavelengths commonly used in MM networks, 850 nm and 1310 nm, to verify the EF statistics. Finally, we provide recommendations for data center systems and networks using OM3 and OM4 MM fiber connections.
Keywords: optical fiber, multi-mode, data centers, encircled flux
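Encircled flux itself is a cumulative radial power fraction: integrating the near-field intensity over radius and normalizing by the total power gives EF(r), which launch-condition standards then constrain at specified radii. A minimal numerical sketch for a Gaussian-like near-field profile (an illustrative profile, not a compliance calculation):

```python
import numpy as np

def encircled_flux(radii, intensity):
    """Cumulative fraction of power inside each radius for a radial intensity profile."""
    ring_power = 2.0 * np.pi * radii * intensity        # power per unit radius
    cumulative = np.cumsum(ring_power) * (radii[1] - radii[0])
    return cumulative / cumulative[-1]                  # normalize to total power

r = np.linspace(0.0, 25.0, 500)          # radius in microns (50 um core)
I = np.exp(-(r / 10.0) ** 2)             # illustrative near-field intensity profile
ef = encircled_flux(r, I)

# EF evaluated at a given radius, e.g. 15 um
print(f"EF(15 um) = {ef[np.searchsorted(r, 15.0)]:.3f}")
```

Source or connector defects shift this curve, which is why the paper tracks EF statistics across its defect scenarios.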
Procedia PDF Downloads 375
219 Techno-Economic Assessment of Distributed Heat Pumps Integration within a Swedish Neighborhood: A Cosimulation Approach
Authors: Monica Arnaudo, Monika Topel, Bjorn Laumert
Abstract:
Within the Swedish context, the current trend of relatively low electricity prices promotes the electrification of the energy infrastructure. The residential heating sector takes part in this transition through a proposed switch from a centralized district heating system towards a distributed, heat-pump-based setting. When it comes to urban environments, two issues arise. The first, seen from an electricity-sector perspective, is that existing networks are limited with regard to their installed capacities. Additional electric loads, such as heat pumps, can cause severe overloads on crucial network elements. The second, seen from a heating-sector perspective, is that indoor comfort conditions can become difficult to maintain when the operation of the heat pumps is limited by the risk of overloading the distribution grid. Furthermore, the uncertainty of future electricity market prices introduces an additional variable. This study aims at assessing the extent to which distributed heat pumps can penetrate an existing heat energy network while respecting the technical limitations of the electricity grid and the thermal comfort levels in the buildings. In order to account for the multi-disciplinary nature of this research question, a cosimulation modeling approach was adopted, in which each energy technology is modeled in its own customized simulation environment. As part of the cosimulation methodology, a steady-state power flow analysis in pandapower was used to model the electrical distribution grid, a thermal balance model of a reference building was implemented in EnergyPlus to account for space heating, and a fluid-cycle model of a heat pump was implemented in JModelica to account for the actual heating technology.
With the models in place, different scenarios based on forecasted electricity market prices were developed both for present and future conditions of Hammarby Sjöstad, a neighborhood located in the south-east of Stockholm (Sweden). For each scenario, the technical and comfort conditions were assessed. Additionally, the average cost of heat generation was estimated in terms of the levelized cost of heat. This indicator enables a techno-economic comparison among the different scenarios. In order to evaluate the levelized cost of heat, a yearly performance simulation of the energy infrastructure was implemented. The scenarios based on current electricity prices show that distributed heat pumps can replace the district heating system by covering up to 30% of the heating demand. By lowering the minimum accepted indoor temperature of the apartments by 2°C, this level of penetration can increase up to 40%. Within the future scenarios, if electricity prices increase, as is most likely expected within the next decade, the penetration of distributed heat pumps can be limited to 15%. In terms of levelized cost of heat, a residential heat pump technology becomes competitive only within a scenario of decreasing electricity prices. In this case, the district heating system is characterized by an average cost of heat generation 7% higher than the distributed heat pump option.
Keywords: cosimulation, distributed heat pumps, district heating, electrical distribution grid, integrated energy systems
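The levelized cost of heat used for the comparison above is conventionally computed as annualized capital cost plus yearly operating cost, divided by the annual heat delivered. A minimal sketch; all numbers below are illustrative placeholders, not the study's inputs:

```python
def lcoh(capex, opex_per_year, annual_heat_mwh, rate=0.05, life=20):
    """Levelized cost of heat (per MWh): annualized CAPEX plus OPEX over heat delivered."""
    crf = rate * (1 + rate) ** life / ((1 + rate) ** life - 1)  # capital recovery factor
    return (capex * crf + opex_per_year) / annual_heat_mwh

# Hypothetical residential heat pump: 12 kEUR installed, electricity-driven OPEX,
# serving 18 MWh of heat per year
heat_pump = lcoh(capex=12_000, opex_per_year=900, annual_heat_mwh=18)
print(f"LCOH = {heat_pump:.1f} EUR/MWh")
```

In the study, the OPEX term is where the electricity-price scenarios enter, which is why the heat pump's competitiveness flips between the price trajectories.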
Procedia PDF Downloads 150
218 Strategy and Mechanism for Intercepting Unpredictable Moving Targets in the Blue-Tailed Damselfly (Ischnura elegans)
Authors: Ziv Kassner, Gal Ribak
Abstract:
Members of the order Odonata (dragonflies and damselflies) stand out for their maneuverability and superb flight control, which allow them to catch flying prey in the air. These outstanding aerial abilities were fine-tuned during millions of years of an evolutionary arms race between Odonata and their prey, providing an attractive research model for studying the relationship between sensory input and aerodynamic output in a flying insect. The ability to catch a maneuvering target in air is interesting not just for insect behavioral ecology and neuroethology but also for designing small and efficient robotic air vehicles. While the aerial prey interception of dragonflies (suborder: Anisoptera) has been studied before, little is known about how damselflies (suborder: Zygoptera) intercept prey. Here, high-speed cameras (filming at 1000 frames per second) were used to explore how damselflies catch unpredictable targets that move through the air. Blue-tailed damselflies, Ischnura elegans (family: Coenagrionidae), were introduced into a flight arena and filmed while landing on moving targets that were oscillated harmonically. The insects succeeded in capturing targets that were moved with an amplitude of 6 cm at frequencies of 0-2.5 Hz (fastest mean target speed of 0.3 m s⁻¹), as well as targets that were moved at 1 Hz (an average speed of 0.3 m s⁻¹) but with an amplitude of 15 cm. To land on stationary or slow targets, damselflies either flew directly to the target or flew sideways up to a point at which the target was fixed in the center of the field of view, followed by a direct flight path towards the target. As the targets moved at increased frequency, damselflies demonstrated an ability to track them while flying sideways and minimizing the changes of their body direction on the yaw axis. This was likely an attempt to keep the targets at the center of the visual field while minimizing rotational optic flow of the surrounding visual panorama.
Stabilizing rotational optic flow helps in estimating the velocity and distance of the target. These results illustrate how dynamic visual information is used by damselflies to guide them towards a maneuvering target, enabling the superb aerial hunting abilities of these insects. They also exemplify the plasticity of the damselfly flight apparatus, which enables flight in any direction, irrespective of the direction of the body.
Keywords: bio-mechanics, insect flight, target fixation, tracking and interception
Procedia PDF Downloads 152
217 Impact of Interface Soil Layer on Groundwater Aquifer Behaviour
Authors: Hayder H. Kareem, Shunqi Pan
Abstract:
The geological environment from which groundwater is collected is the most important element affecting the behaviour of a groundwater aquifer. As groundwater is a vital resource worldwide, the parameters that affect this source must be known accurately so that the conceptualized mathematical models are acceptable over the broadest ranges. Therefore, groundwater models have recently become an effective and efficient tool to investigate groundwater aquifer behaviours. A groundwater aquifer may contain aquitards, aquicludes, or interfaces within its geological formations. Aquitards and aquicludes have geological formations that force modellers to include them within conceptualized groundwater models, while interfaces are commonly neglected from the conceptualization process because modellers believe that an interface has no effect on aquifer behaviour. The current research highlights the impact of an interface existing in a real unconfined groundwater aquifer called Dibdibba, located in Al-Najaf City, Iraq, through whose eastern part the Euphrates River passes. The Dibdibba groundwater aquifer consists of two types of soil layers separated by an interface soil layer. A groundwater model was built for Al-Najaf City to explore the impact of this interface. The calibration process was done using the PEST ('Parameter ESTimation') approach, and the best Dibdibba groundwater model was obtained. When the soil interface is conceptualized, results show that the groundwater tables are significantly affected by that interface, with dry areas of 56.24 km² and 6.16 km² appearing in the upper and lower layers of the aquifer, respectively, and the Euphrates River leaking 7,359 m³/day into the groundwater aquifer. These results change when the soil interface is neglected: the dry area becomes 0.16 km² and the Euphrates River leakage becomes 6,334 m³/day.
In addition, the conceptualized models (with and without the interface) respond differently to changes in the recharge rates applied to the aquifer in the uncertainty analysis test. The Dibdibba aquifer in Al-Najaf City shows a slight deficit in the amount of water supplied under the current pumping scheme, and the Euphrates River suffers from the stresses applied to the aquifer. Ultimately, this study shows a crucial need to represent the interface soil layer in model conceptualization so that the intended and future predicted behaviours are reliable enough for planning purposes.
Keywords: Al-Najaf City, groundwater aquifer behaviour, groundwater modelling, interface soil layer, Visual MODFLOW
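The PEST calibration mentioned in this abstract works by adjusting model parameters until simulated heads match observations. The toy sketch below illustrates that idea only; it is not PEST itself, and the forward model, conductivity candidates, and observation values are all invented for illustration:

```python
# Illustrative sketch of parameter-estimation-style calibration (not PEST):
# search a parameter space for the value minimizing the misfit between
# simulated and observed heads. Model and numbers are hypothetical.

def simulate_heads(conductivity, stresses):
    """Toy forward model: head drawdown scales inversely with conductivity."""
    return [100.0 - s / conductivity for s in stresses]

def sum_squared_error(simulated, observed):
    """Misfit between simulated and observed heads."""
    return sum((s - o) ** 2 for s, o in zip(simulated, observed))

def calibrate(observed, stresses, candidates):
    """Pick the candidate conductivity with the smallest misfit."""
    return min(candidates,
               key=lambda k: sum_squared_error(simulate_heads(k, stresses), observed))

# Synthetic observations that a conductivity of 2.0 reproduces exactly
stresses = [10.0, 20.0, 30.0]
observed = [95.0, 90.0, 85.0]
best_k = calibrate(observed, stresses, candidates=[0.5, 1.0, 2.0, 4.0])
```

Real calibration tools replace the grid search with gradient-based or global optimizers, but the objective (minimizing head residuals) is the same.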
Procedia PDF Downloads 183
216 Towards End-To-End Disease Prediction from Raw Metagenomic Data
Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker
Abstract:
Analysis of the human microbiome using metagenomic sequencing data has demonstrated a strong ability to discriminate various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequence reads from fragmented DNA sequences, stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use and time-consuming, and they rely on a large number of parameters that often introduce variability and affect the estimation of the microbiome elements. Training deep neural networks directly on raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most of these methods use the concept of word and sentence embeddings, which creates a meaningful numerical representation of DNA sequences while extracting features and reducing the dimensionality of the data. In this paper we present an end-to-end approach, metagenome2vec, that classifies patients into disease groups directly from raw metagenomic reads. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which the sequence is most likely to come; and (iv) training a multiple instance learning classifier which predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight reflecting each genome's influence on the prediction. 
Using two public real-life datasets as well as a simulated one, we demonstrate that this original approach reaches high performance, comparable with state-of-the-art methods applied directly to data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that with further dedication, the DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
Keywords: deep learning, disease prediction, end-to-end machine learning, metagenomics, multiple instance learning, precision medicine
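Step (i) of the pipeline described above builds a vocabulary of k-mers (all length-k substrings) from the reads before learning their embeddings. A minimal sketch of that step only; the reads and k below are illustrative, and the real metagenome2vec vocabulary construction is more involved:

```python
def kmer_vocabulary(reads, k):
    """Collect the sorted set of k-mers (length-k substrings) across all reads."""
    vocab = set()
    for read in reads:
        # Slide a window of width k along the read
        for i in range(len(read) - k + 1):
            vocab.add(read[i:i + k])
    return sorted(vocab)

reads = ["ACGTAC", "GTACGT"]   # two toy reads
vocab = kmer_vocabulary(reads, k=3)
```

In practice each k-mer in the vocabulary would then be mapped to a learned embedding vector, as in word-embedding models.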
Procedia PDF Downloads 125
215 Flood Mapping Using Height above the Nearest Drainage Model: A Case Study in Fredericton, NB, Canada
Authors: Morteza Esfandiari, Shabnam Jabari, Heather MacGrath, David Coleman
Abstract:
Flooding is a severe issue in many parts of the world, including the city of Fredericton, New Brunswick, Canada. The downtown area of Fredericton is close to the Saint John River, which is susceptible to flooding around May every year. Recently, the frequency of flooding appears to have increased, especially given that the downtown area and surrounding urban/agricultural lands were flooded in two consecutive years, 2018 and 2019. In order to obtain a clear picture of flood extent and damage to affected areas, it is necessary to use either flood inundation modelling or satellite data. Because optical satellites are weather-dependent and only contingently available, and because hydrodynamic models are costly and existing data are limited, it is not always feasible to rely on these sources to generate quality flood maps during or after a catastrophe. Height Above the Nearest Drainage (HAND), a state-of-the-art topo-hydrological index, normalizes the heights in a basin relative to the elevation along the stream network and specifies the gravitational, or relative, drainage potential of an area. HAND is the relative height difference between the stream network and each cell of a Digital Terrain Model (DTM). The stream layer is produced through a multi-step, time-consuming process which does not always result in an optimal representation of the river centerline, depending on the topographic complexity of the region. HAND has been used in numerous case studies with quite acceptable, and sometimes unexpected, results because of natural and human-made features on the surface of the earth. Some of these features might disturb the generated model, and consequently the model might not predict the flow simulation accurately. We propose to include a previously existing stream layer generated by the Province of New Brunswick and to benefit from culvert maps to improve the water flow simulation and, accordingly, the accuracy of the HAND model. 
By considering these parameters in our processing, we were able to increase the accuracy of the model from nearly 74% to almost 92%. The improved model can be used for generating highly accurate flood maps, which are necessary for future urban planning and flood damage estimation, without any need for satellite imagery or hydrodynamic computations.
Keywords: HAND, DTM, rapid floodplain, simplified conceptual models
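The core of HAND is subtracting, from each cell's elevation, the elevation of its nearest drainage cell; cells whose HAND value falls below the water stage are then mapped as flooded. A heavily simplified one-dimensional sketch of that idea (the elevations are hypothetical, and real HAND implementations trace flow directions across a 2-D DTM rather than using horizontal distance):

```python
def hand_1d(dtm, stream_cells):
    """HAND for a 1-D transect: height of each cell above its nearest stream cell."""
    hand = []
    for i, z in enumerate(dtm):
        nearest = min(stream_cells, key=lambda s: abs(s - i))
        hand.append(z - dtm[nearest])
    return hand

def flooded(hand, stage):
    """A cell is inundated when the water stage above the drainage exceeds its HAND."""
    return [h <= stage for h in hand]

dtm = [12.0, 10.0, 8.0, 9.0, 11.0]   # hypothetical terrain elevations (m)
stream = [2]                         # the stream occupies cell index 2
hand = hand_1d(dtm, stream)
wet = flooded(hand, stage=1.0)       # which cells a 1 m stage would inundate
```

This makes clear why an accurate stream layer matters: every HAND value is measured relative to the drainage cells, so errors in the stream layer propagate directly into the flood map.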
Procedia PDF Downloads 151
214 Artificial Membrane Comparison for Skin Permeation in Skin PAMPA
Authors: Aurea C. L. Lacerda, Paulo R. H. Moreno, Bruna M. P. Vianna, Cristina H. R. Serra, Airton Martin, André R. Baby, Vladi O. Consiglieri, Telma M. Kaneko
Abstract:
The modified Franz cell is the most widely used model for in vitro permeation studies; however, it still presents some disadvantages. Thus, alternative methods have been developed, such as Skin PAMPA, a bio-artificial membrane that has been applied to estimate skin penetration of xenobiotics based on a high-throughput (HT) permeability model. Skin PAMPA's greatest advantage is that it allows more tests to be carried out quickly and inexpensively. The membrane system mimics the characteristics of the stratum corneum, which is the primary skin barrier. The barrier properties are given by corneocytes embedded in a multilamellar lipid matrix. This layer is the main penetration route through the paracellular permeation pathway and consists of a mixture of cholesterol, ceramides, and fatty acids as the dominant components. However, there is no consensus on the membrane composition. The objective of this work was to compare the performance of different bio-artificial membranes for studying permeation in the skin PAMPA system. Material and methods: To mimic the lipid composition present in the human stratum corneum, six membranes were developed. The membrane composition was an equimolar mixture of cholesterol, ceramides 1-O-C18:1, C22, and C20, plus fatty acids C20 and C24. The membrane integrity assay was based on the transport of Brilliant Cresyl Blue, which has low permeability, and Lucifer Yellow, which has very poor permeability and should effectively be completely rejected. The membrane characterization was performed using confocal laser Raman spectroscopy, with a stabilized laser at 785 nm, a 10-second integration time, and 2 accumulations. The membrane behaviour results in the PAMPA system were statistically evaluated, and all of the compositions showed integrity and permeability. 
The confocal Raman spectra, obtained in the 800-1200 cm⁻¹ region associated with the C-C stretches of the carbon scaffold of the stratum corneum lipids, showed a similar pattern for all the membranes. The ceramides, long-chain fatty acids and cholesterol in equimolar ratio made it possible to obtain lipid mixtures with self-organization capability, similar to that occurring in the stratum corneum. Conclusion: The artificial biological membranes studied for Skin PAMPA proved to be similar, with properties comparable to the stratum corneum.
Keywords: bio-artificial membranes, comparison, confocal Raman, skin PAMPA
Procedia PDF Downloads 509
213 Multiperson Drone Control with Seamless Pilot Switching Using Onboard Camera and Openpose Real-Time Keypoint Detection
Authors: Evan Lowhorn, Rocio Alba-Flores
Abstract:
Traditional classification Convolutional Neural Networks (CNNs) attempt to classify an image in its entirety. This becomes problematic when trying to perform classification with a drone’s camera in real time due to unpredictable backgrounds. Object detectors with bounding boxes can be used to isolate individuals and other items, but the original backgrounds remain within these boxes. These basic detectors have been regularly used to determine what type of object an item is, such as “person” or “dog.” A recent advancement in computer vision, particularly in human imaging, is keypoint detection. Human keypoint detection goes beyond bounding boxes to fully isolate humans and plot points, or Regions of Interest (ROIs), on their bodies within an image. ROIs can include shoulders, elbows, knees, heads, etc. These points can then be related to each other and used in deep learning methods such as pose estimation. For drone control based on human motions, poses, or signals using the onboard camera, it is important to have a simple method for pilot identification among multiple individuals while also giving the pilot fine control options for the drone. To achieve this, the OpenPose keypoint detection network was used with body and hand keypoint detection enabled. OpenPose supports combining multiple keypoint detection methods in real time within a single network. Body keypoint detection allows simple poses to act as the pilot identifier. The hand keypoint detection, with ROIs for each finger, then offers a greater variety of signal options for the pilot once identified. For this work, the individual must raise their non-control arm to be identified as the operator and send commands with the hand of their other arm. The drone ignores all other individuals in the onboard camera feed until the current operator lowers their non-control arm. 
When another individual wishes to operate the drone, they simply raise their arm once the current operator relinquishes control, and then they can begin controlling the drone with their other hand. This is all performed mid-flight with no landing or script editing required. When using a desktop with a discrete NVIDIA GPU, the drone’s 2.4 GHz Wi-Fi connection, combined with restricting OpenPose to body and hand detection only, allows this control method to perform as intended while maintaining the responsiveness required for practical use.
Keywords: computer vision, drone control, keypoint detection, openpose
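The pilot-selection rule described above (raise the non-control arm to claim control) can be sketched from keypoint coordinates alone. The snippet below is an illustration, not the authors' code: it assumes image coordinates with y increasing downward, and the keypoint names are invented rather than OpenPose's actual output format:

```python
# Hedged sketch of arm-raised pilot identification from 2-D keypoints.
# "Raised" means the wrist keypoint sits above the shoulder keypoint
# (smaller y, since image y grows downward). Keypoint names are illustrative.

def arm_raised(person, side):
    """An arm counts as raised when its wrist is above its shoulder."""
    return person[f"{side}_wrist"][1] < person[f"{side}_shoulder"][1]

def select_pilot(people, control_side="right"):
    """Return the index of the first person whose NON-control arm is raised."""
    non_control = "left" if control_side == "right" else "right"
    for idx, person in enumerate(people):
        if arm_raised(person, non_control):
            return idx
    return None  # nobody is claiming control

people = [
    {"left_wrist": (50, 200), "left_shoulder": (55, 120),
     "right_wrist": (90, 180), "right_shoulder": (85, 125)},   # arms down
    {"left_wrist": (150, 80), "left_shoulder": (155, 130),
     "right_wrist": (190, 170), "right_shoulder": (185, 125)}, # left arm raised
]
pilot = select_pilot(people)
```

In the real system this check would run per frame on OpenPose's body keypoints, and control would be released when the operator lowers the arm.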
Procedia PDF Downloads 184
212 Design and Evaluation of a Prototype for Non-Invasive Screening of Diabetes – Skin Impedance Technique
Authors: Pavana Basavakumar, Devadas Bhat
Abstract:
Diabetes is a disease which often goes undiagnosed until its secondary effects are noticed. Early detection of the disease is necessary to avoid serious consequences which could lead to the death of the patient. Conventional invasive tests for screening of diabetes are mostly painful, time-consuming and expensive. Since there is also a risk of infection involved, it is essential to develop non-invasive methods to screen and estimate the level of blood glucose. Extensive research is going on with this perspective, involving various techniques that explore optical, electrical, chemical and thermal properties of the human body that directly or indirectly depend on the blood glucose concentration. Thus, non-invasive blood glucose monitoring has grown into a vast field of research. In this project, an attempt was made to devise a prototype for screening of diabetes by measuring the electrical impedance of the skin and building a model to predict a patient’s condition based on the measured impedance. The prototype developed passes a negligible constant current (0.5 mA) across a subject’s index finger through tetrapolar silver electrodes and measures the output voltage across a wide range of frequencies (10 kHz – 4 MHz). The measured voltage is proportional to the impedance of the skin. The impedance was acquired in real time for further analysis. The study was conducted on over 75 subjects with permission from the institutional ethics committee; along with impedance, the subjects’ blood glucose values were also noted using the conventional method. Nonlinear regression analysis was performed on the features extracted from the impedance data to obtain a model that predicts blood glucose values for a given set of features. When the predicted data were depicted on Clarke’s Error Grid, only 58% of the values predicted were clinically acceptable. 
Since the objective of the project was to screen for diabetes and not the actual estimation of blood glucose, the data were classified into three classes, ‘NORMAL FASTING’, ‘NORMAL POSTPRANDIAL’ and ‘HIGH’, using a linear Support Vector Machine (SVM). The classification accuracy obtained was 91.4%. The developed prototype was economical, fast and pain-free. Thus, it can be used for mass screening of diabetes.
Keywords: Clarke’s error grid, electrical impedance of skin, linear SVM, nonlinear regression, non-invasive blood glucose monitoring, screening device for diabetes
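Since a constant 0.5 mA current is passed through the finger, skin impedance magnitude follows directly from Ohm's law at each sweep frequency, which is why the measured voltage is proportional to impedance. A minimal sketch of that conversion (the voltage readings below are hypothetical):

```python
# Ohm's law: with a known constant excitation current I, |Z| = V / I.
# The 0.5 mA value comes from the abstract; the voltages are invented.

CURRENT_A = 0.5e-3  # 0.5 mA constant excitation current

def impedance_ohms(voltages):
    """Convert measured output voltages (V) to impedance magnitudes (ohms)."""
    return [v / CURRENT_A for v in voltages]

# Hypothetical output voltages at three sweep frequencies
voltages = [0.25, 0.10, 0.05]
z = impedance_ohms(voltages)  # impedance falls as frequency rises
```

Features extracted from such an impedance-versus-frequency curve are what the regression and SVM stages then operate on.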
Procedia PDF Downloads 325
211 Evaluation of Antioxidant Activity and Total Phenolic Content of Lens Esculenta Moench, Seeds
Authors: Vivek Kumar Gupta, Kripi Vohra, Monika Gupta
Abstract:
Pulses have been a vital ingredient of the balanced human diet in India. Lentil (Lens culinaris Medikus or Lens esculenta Moench.) is a common legume known since biblical times. Lentil seeds, with or without hulls, are cooked as dhal, and this has been the main dish for millennia in the South Asian region. Oxidative stress can damage lipids, proteins, enzymes, carbohydrates and DNA in cells and tissues, resulting in membrane damage, fragmentation or random cross-linking of molecules like DNA, enzymes and structural proteins, and can even lead to cell death induced by DNA fragmentation and lipid peroxidation. These consequences of oxidative stress form the molecular basis for the development of cancer, neurodegenerative disorders, cardiovascular diseases, diabetes and autoimmune diseases. The aim of the present work is to assess the antioxidant potential of the petroleum ether, acetone, methanol and water extracts of Lens esculenta seeds. In vitro antioxidant assessment of the extracts was carried out using the 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical scavenging activity, hydroxyl radical scavenging activity, and reducing power assays. The quantitative estimation of total phenolic content and total flavonoid content in extracts and in plant material, as well as total saponin content, total alkaloid content, crude fibre content, total volatile content, fat content and mucilage content in the drug material, was also carried out. Though all the extracts exhibited dose-dependent reducing power activity, the acetone extract was found to possess greater hydrogen-donating ability in the DPPH (45.83%-93.13%) and hydroxyl radical scavenging (28.7%-46.41%) systems than the petroleum ether, methanol and water extracts. Total phenolic content in the acetone and methanol extracts was found to be 608 and 188 mg gallic acid equivalent of phenol/g of sample, respectively. Total flavonoid content of the acetone and methanol extracts was found to be 128 and 30.6 mg quercetin equivalent/g of sample, respectively. 
It is evident that the acetone extract of lentil seeds possesses high levels of polyphenolics and flavonoids that could be utilized as antioxidants and nutraceuticals.
Keywords: antioxidant, flavonoids, Lens esculenta, polyphenols
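The DPPH scavenging percentages reported above are conventionally computed from absorbance readings as (A_control − A_sample) / A_control × 100. A minimal sketch of that standard formula, with hypothetical absorbance values:

```python
# Standard DPPH radical-scavenging calculation; absorbance values are invented.

def scavenging_percent(abs_control, abs_sample):
    """Scavenging (%) = (A_control - A_sample) / A_control * 100."""
    return (abs_control - abs_sample) / abs_control * 100.0

# Hypothetical absorbances at 517 nm: DPPH alone vs. DPPH + extract
pct = scavenging_percent(0.80, 0.20)
```

A lower sample absorbance means more of the DPPH radical was quenched, hence a higher scavenging percentage.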
Procedia PDF Downloads 484
210 Artificial Neural Networks Application on Nusselt Number and Pressure Drop Prediction in Triangular Corrugated Plate Heat Exchanger
Authors: Hany Elsaid Fawaz Abdallah
Abstract:
This study presents a new artificial neural network (ANN) model to predict the Nusselt number and pressure drop for turbulent flow in a triangular corrugated plate heat exchanger for forced air and turbulent water flow. An experimental investigation was performed to create a new dataset of Nusselt number and pressure drop values in the following range of dimensionless parameters: plate corrugation angles from 0° to 60°, Reynolds numbers from 10000 to 40000, pitch-to-height ratios from 1 to 4, and Prandtl numbers from 0.7 to 200. Based on the ANN performance graph, a three-layer structure with {12-8-6} hidden neurons was chosen. The training procedure includes back-propagation with bias and weight adjustment, evaluation of the loss function for the training and validation datasets, and feed-forward propagation of the input parameters. The linear function was used as the activation function at the output layer, while the rectified linear unit activation function was utilized for the hidden layers. To accelerate the ANN training, loss function minimization was achieved with the adaptive moment estimation algorithm (ADAM). The ‘‘MinMax’’ normalization approach was utilized to avoid an increase in training time due to drastic differences in the loss function gradients with respect to the values of the weights. Since the test dataset is not used for ANN training, a cross-validation technique was applied to the network using the new data. This procedure was repeated until loss function convergence was achieved, or for 4000 epochs with a batch size of 200 points. The program code was written in Python 3.0 using open-source ANN libraries such as scikit-learn, TensorFlow and Keras. Mean average percent errors of 9.4% for the Nusselt number and 8.2% for the pressure drop were achieved with the ANN model. Therefore, higher accuracy compared to the generalized correlations was achieved. 
The performance validation of the obtained model was based on a comparison of predicted data with the experimental results, yielding excellent accuracy.
Keywords: artificial neural networks, corrugated channel, heat transfer enhancement, Nusselt number, pressure drop, generalized correlations
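The ''MinMax'' normalization mentioned above maps each input feature linearly onto a fixed interval before training, so that features with very different scales contribute comparable gradients. A minimal sketch of that scaling, using the abstract's Reynolds-number range as example data:

```python
# MinMax normalization: map values linearly onto [lo, hi].
# The Reynolds-number range 10000-40000 comes from the abstract.

def minmax_scale(values, lo=0.0, hi=1.0):
    """Linearly rescale values so min(values) -> lo and max(values) -> hi."""
    vmin, vmax = min(values), max(values)
    if vmax == vmin:              # constant feature: nothing to rescale
        return [lo for _ in values]
    return [lo + (v - vmin) * (hi - lo) / (vmax - vmin) for v in values]

reynolds = [10000.0, 25000.0, 40000.0]
scaled = minmax_scale(reynolds)
```

Libraries such as scikit-learn provide the same transform as `MinMaxScaler`; the inline version just makes the arithmetic explicit.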
Procedia PDF Downloads 87
209 Estimation of Ribb Dam Catchment Sediment Yield and Reservoir Effective Life Using Soil and Water Assessment Tool Model and Empirical Methods
Authors: Getalem E. Haylia
Abstract:
The Ribb dam is one of the irrigation projects in the Upper Blue Nile basin, Ethiopia, built to irrigate the Fogera plain. Reservoir sedimentation is a major problem because the accumulation of sediments coming from the watershed reduces the useful reservoir capacity. Estimates of sediment yield are needed for studies of reservoir sedimentation and for planning soil and water conservation measures. The objective of this study was to simulate the sediment yield of the Ribb dam catchment using the SWAT model and to estimate the effective life of the Ribb reservoir according to trap efficiency methods. The Ribb dam catchment is found in the north-western Ethiopian highlands and belongs to the Upper Blue Nile and Lake Tana basins. The Soil and Water Assessment Tool (SWAT) was selected to simulate flow and sediment yield in the Ribb dam catchment. Model sensitivity, calibration, and validation analyses at the Ambo Bahir site were performed with Sequential Uncertainty Fitting (SUFI-2). The flow data at this site were obtained by transforming the Lower Ribb gauge station (2002-2013) flow data using the Area Ratio Method. The sediment load was derived from the sediment concentration yield curve of the Ambo site. Streamflow results showed that the Nash-Sutcliffe efficiency coefficient (NSE) was 0.81 and the coefficient of determination (R²) was 0.86 in the calibration period (2004-2010), and 0.74 and 0.77 in the validation period (2011-2013), respectively. Using the same periods, the NSE and R² were 0.85 and 0.79 for the sediment load calibration, and 0.83 and 0.78 for the validation, respectively. The simulated average daily flow rate and sediment yield generated from the Ribb dam watershed were 3.38 m³/s and 1772.96 tons/km²/yr, respectively. The effective life of the Ribb reservoir was estimated using the empirical methods of Brune (1953), Churchill (1948) and Brown (1958) and found to be 30, 38 and 29 years, respectively. 
To conclude, massive sediment loads come from the steep-slope agricultural areas, and approximately 98-100% of the incoming annual sediment load is trapped by the Ribb reservoir. In the Ribb catchment, as well as in the reservoir, systematic and thorough consideration of technical, social, environmental, and catchment-management practices should be made to lengthen the useful life of the Ribb reservoir.
Keywords: catchment, reservoir effective life, reservoir sedimentation, Ribb, sediment yield, SWAT model
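The Nash-Sutcliffe efficiency used above to judge the SWAT calibration compares the model's residuals against the variance of the observations. A minimal sketch with invented flow values (NSE = 1 is a perfect fit; NSE = 0 means the model is no better than predicting the observed mean):

```python
# NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
# Flow series below are invented for illustration.

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe model efficiency coefficient."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

obs = [3.0, 4.0, 5.0, 6.0]
nse_perfect = nash_sutcliffe(obs, [3.0, 4.0, 5.0, 6.0])  # exact match
nse_mean = nash_sutcliffe(obs, [4.5, 4.5, 4.5, 4.5])     # just predicts the mean
```

Values like the 0.81 and 0.85 reported in the abstract therefore indicate a model that explains most of the observed variance.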
Procedia PDF Downloads 187
208 Investigating the Role of Supplier Involvement in the Design Process as an Approach for Enhancing Building Maintainability
Authors: Kamal Ahmed, Othman Ayman, Refat Mostafa
Abstract:
The post-construction phase represents a critical milestone in the project lifecycle, because design errors and omissions, as well as construction defects, are exposed during this phase. The traditional procurement approaches commonly adopted in construction projects separate design from construction, which ultimately inhibits contractors, suppliers and other parties from providing the design team with constructive comments and feedback to improve the project design. As a result, failure to consider maintainability aspects during the design process increases maintenance and operation costs and reduces building performance. This research aims to investigate the role of Early Supplier Involvement (ESI) in the design process as an approach to enhancing building maintainability. In order to achieve this aim, a research methodology consisting of a literature review, case studies and a survey questionnaire was designed to accomplish four objectives. Firstly, a literature review was used to examine the concepts of building maintenance, maintainability, the design process and ESI. Secondly, three case studies were presented and analyzed to investigate the role of ESI in enhancing building maintainability during the design process. Thirdly, a survey questionnaire was conducted with a representative sample of Architectural Design Firms (ADFs) in Egypt to investigate their perception and application of ESI towards enhancing building maintainability during the design process. Finally, the research developed a framework to facilitate ESI in the design process in ADFs in Egypt. Data analysis showed that the ‘Difficulty of trusting external parties and sharing information with transparency’ was ranked the highest challenge of ESI in ADFs in Egypt, followed by ‘Legal competitive advantage restrictions’. 
Moreover, ‘Better estimation for operation and maintenance costs’ was ranked the highest contribution of ESI towards enhancing building maintainability, followed by ‘Reduce the number of operation and maintenance problems or reworks’. Finally, ‘Innovation, technical expertise, and competence’ was ranked the highest supplier-selection criterion, while ‘paying consultation fees for offering advice and recommendations to the design team’ was ranked the highest form of supplier remuneration. The proposed framework represents a synthesis that is creative in thought and adds value to the knowledge in a manner that has not previously occurred.
Keywords: maintenance, building maintainability, building life cycle cost (LCC), material supplier
Procedia PDF Downloads 47
207 Assessment and Forecasting of the Impact of Negative Environmental Factors on Public Health
Authors: Nurlan Smagulov, Aiman Konkabayeva, Akerke Sadykova, Arailym Serik
Abstract:
Introduction. Adverse environmental factors do not immediately lead to pathological changes in the body. They can promote the growth of pre-pathology, characterized by shifts in physiological, biochemical, immunological and other indicators of the body's state. These disorders are unstable, reversible and indicative of the body's reactions. There is an opportunity to objectively judge the internal structure of adaptive body reactions at the level of individual organs and systems. To obtain a stable response of the body to the chronic effects of unfavourable environmental factors of low intensity (compared to factors of the production environment), a period called the «lag time» is needed. Results obtained without considering this factor distort reality and, for the most part, cannot reliably support the main conclusions of any study. A technique is needed that reduces methodological errors and combines statistical methods with a medical point of view, thereby improving the obtained results and avoiding spurious correlations. Objective. Development of a methodology for assessing and predicting the impact of environmental factors on population health, considering the «lag time». Methods. Research objects: environmental indicators and population morbidity indicators. The database on the environmental state was compiled from the monthly newsletters of Kazhydromet. Data on population morbidity were obtained from regional statistical yearbooks. When processing the statistical data, a time interval (lag) was determined for each «argument-function» pair, that is, the interval after which the effect of the harmful factor (argument) fully manifests itself in the indicators of the organism's state (function). The lag value was determined by cross-correlation functions of the arguments (environmental indicators) with the functions (morbidity). 
Correlation coefficients (r) and their reliability (t), Fisher's criterion (F) and the influence share (R²) of the main factor (argument) per indicator (function) were calculated as percentages. Results. The ecological situation of an industrially developed region has an impact on health indicators, but with some nuances. Fundamentally opposite results were obtained when the data were processed considering the «lag time». Namely, a pronounced correlation was revealed after the two databases (ecology-morbidity) were shifted. For example, the lag period was 4 years for dust concentration and general morbidity, and 3 years for childhood morbidity. These periods accounted for the maximum values of the correlation coefficients and the largest percentage share of the influencing factor. Similar results were observed for the concentrations of soot, dioxide, etc. Comprehensive statistical processing using multiple correlation-regression variance analysis confirms the correctness of the above statement. This method provided an integrated approach to predicting the degree of pollution of the main environmental components and to identifying the most dangerous combinations of concentrations of the leading negative environmental factors. Conclusion. The method of assessing the «environment-public health» system considering the «lag time» is qualitatively different from the traditional one (without the «lag time»). The results differ significantly and lend themselves better to a logical explanation of the observed dependencies. The method makes it possible to present the quantitative and qualitative dependencies within the «environment-public health» system in a different way.
Keywords: ecology, morbidity, population, lag time
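The lag-selection step described above can be sketched as follows: shift the morbidity series back by candidate lags and keep the lag giving the highest Pearson correlation against the exposure series. The annual series below are invented so that morbidity exactly mirrors dust concentration with a two-year delay:

```python
# Cross-correlation lag search: for each candidate lag, correlate
# exposure[0:n-lag] with morbidity[lag:n] and keep the best lag.
# The two annual series below are hypothetical.

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def best_lag(exposure, morbidity, max_lag):
    """Return the lag (years) maximizing r, plus all lag -> r scores."""
    scores = {}
    for lag in range(max_lag + 1):
        scores[lag] = pearson(exposure[:len(exposure) - lag], morbidity[lag:])
    return max(scores, key=scores.get), scores

dust =    [2.0, 5.0, 1.0, 4.0, 3.0, 6.0, 2.0, 7.0]  # exposure by year
illness = [0.0, 0.0, 2.0, 5.0, 1.0, 4.0, 3.0, 6.0]  # mirrors dust, 2 yr later
lag, scores = best_lag(dust, illness, max_lag=3)
```

With real data the correlations at the best lag would be well below 1, but the selection logic is the same.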
Procedia PDF Downloads 81
206 Academic Achievement in Argentinean College Students: Major Findings in Psychological Assessment
Authors: F. Uriel, M. M. Fernandez Liporace
Abstract:
In the last decade, academic achievement in higher education has become a topic on the agenda in Argentina, given the high rates of adjustment problems, academic failure and dropout, and the low graduation rates in a context of massive classes and traditional teaching methods. Psychological variables, such as perceived social support, academic motivation, and learning styles and strategies, have much to offer, since their measurement by tests allows a proper diagnosis of their influence on academic achievement. Framed within a larger research project, several studies analysed multiple samples totalling 5,135 students attending Argentinean public universities. The first goal was the identification of statistically significant differences in the psychological variables -perceived social support, learning styles, learning strategies, and academic motivation- by age, gender, and degree of academic advancement (freshmen versus sophomores). Thus, an inferential group-differences study for each psychological dependent variable was developed by means of Student's t-tests, given the features of the data distribution. The second goal, examining associations between the four psychological variables on the one hand and academic achievement on the other, was addressed by correlational studies, calculating Pearson's coefficients and employing grades as the quantitative indicator of academic achievement. The positive and significant results obtained led to the formulation of different predictive models of academic achievement, which had to be tested in terms of fit and predictive power. These models took the four psychological variables mentioned above as predictors, using regression equations, examining predictors individually, in groups of two, and together, analysing indirect effects as well, and adding degree of academic advancement and gender, which had shown their importance in the first goal's findings. 
The most relevant results were: first, gender showed no influence on any dependent variable. Second, only good achievers perceived high social support from teachers, and male students were prone to perceive less social support. Third, freshmen exhibited a pragmatic learning style, preferring unstructured environments, the use of examples and simultaneous-visual processing in learning, whereas sophomores manifested an assimilative learning style, choosing sequential and analytic processing modes. Fourth, despite these features, freshmen have to deal with abstract contents and sophomores with practical learning situations, due to the study programs in force. Fifth, no differences in academic motivation were found between freshmen and sophomores; however, the latter employ a greater number of more efficient learning strategies. Sixth, freshmen low achievers lack intrinsic motivation. Seventh, model testing showed that social support, learning styles and academic motivation influence learning strategies, which affect academic achievement in freshmen, particularly males; only learning styles influence achievement in sophomores of both genders, with direct effects. These findings led to the conclusion that educational psychologists, education specialists, teachers, and universities must plan urgent and major changes. These must be applied in renewed and better study programs, syllabi and classes, as well as tutoring and training systems. Such developments should be targeted at supporting and empowering students in their academic pathways, and therefore at upgrading learning quality, especially in the case of freshmen, male freshmen, and low achievers.
Keywords: academic achievement, academic motivation, coping, learning strategies, learning styles, perceived social support
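The group-difference studies mentioned above rely on Student's t-tests. A minimal sketch of the pooled two-sample t statistic that underlies such a test (the score data below are hypothetical, not the study's):

```python
from math import sqrt

def students_t(group_a, group_b):
    """Pooled two-sample Student's t statistic for a group-difference test."""
    n1, n2 = len(group_a), len(group_b)
    m1, m2 = sum(group_a) / n1, sum(group_b) / n2
    ss1 = sum((x - m1) ** 2 for x in group_a)   # within-group sums of squares
    ss2 = sum((x - m2) ** 2 for x in group_b)
    pooled_var = (ss1 + ss2) / (n1 + n2 - 2)    # pooled variance estimate
    return (m1 - m2) / sqrt(pooled_var * (1 / n1 + 1 / n2))

# Hypothetical motivation scores for freshmen vs sophomores
freshmen = [3.0, 4.0, 5.0, 4.0]
sophomores = [5.0, 6.0, 7.0, 6.0]
t_stat = students_t(freshmen, sophomores)
```

The statistic is then compared against the t distribution with n1 + n2 − 2 degrees of freedom to decide significance.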
Procedia PDF Downloads 122
205 An Optimal Control Method for Reconstruction of Topography in Dam-Break Flows
Authors: Alia Alghosoun, Nabil El Moçayd, Mohammed Seaid
Abstract:
Modeling dam-break flows over non-flat beds requires an accurate representation of the topography, which is the main source of uncertainty in the model. Therefore, developing robust and accurate techniques for reconstructing the topography in this class of problems would reduce the uncertainty in the flow system. In many hydraulic applications, experimental techniques have been widely used to measure the bed topography. In practice, experimental work in hydraulics can be very demanding in both time and cost. Meanwhile, computational hydraulics has served as an alternative to laboratory and field experiments. Unlike the forward problem, the inverse problem is used to identify the bed parameters from the given experimental data. In this case, the shallow water equations used for modeling the hydraulics need to be rearranged so that the model parameters can be evaluated from measured data. However, this approach is not always possible, and it suffers from stability restrictions. In the present work, we propose an adaptive optimal control technique to numerically identify the underlying bed topography from a given set of free-surface observation data. In this approach, a minimization function is defined to iteratively determine the model parameters. The proposed technique can be interpreted as a fractional-stage scheme. In the first stage, the forward problem is solved to determine the measurable parameters from known data. In the second stage, an adaptive-control Ensemble Kalman Filter is implemented to assimilate the observation data and obtain an accurate estimate of the topography. The main features of this method are, on the one hand, the ability to handle different complex geometries with no need for any rearrangement of the original model to rewrite it in an explicit form, and, on the other hand, its strong stability for simulations of flows in different regimes containing shocks or discontinuities over any geometry. 
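As a rough illustration of the second-stage Ensemble Kalman Filter update, the sketch below assimilates synthetic free-surface observations into an ensemble of bed-elevation samples. The shallow-water solver is replaced by a toy linear forward model, and all names, dimensions and noise levels are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

def forward_model(bed):
    # Stub for the shallow-water solver: maps bed elevation to a
    # free-surface observation (here, a simple linear proxy).
    return 1.0 - 0.9 * bed

n_ens, n_cells = 50, 8
true_bed = 0.2 * np.sin(np.linspace(0, np.pi, n_cells))
obs = forward_model(true_bed) + rng.normal(0, 0.01, n_cells)

# Initial ensemble of bed-topography samples
ensemble = rng.normal(0.1, 0.1, (n_ens, n_cells))
obs_err = 0.01

for _ in range(10):  # iterative EnKF assimilation loop
    predicted = np.array([forward_model(m) for m in ensemble])
    A = ensemble - ensemble.mean(0)   # state anomalies
    D = predicted - predicted.mean(0) # predicted-observation anomalies
    C_md = A.T @ D / (n_ens - 1)      # cross-covariance
    C_dd = D.T @ D / (n_ens - 1) + obs_err**2 * np.eye(n_cells)
    K = C_md @ np.linalg.inv(C_dd)    # Kalman gain
    perturbed = obs + rng.normal(0, obs_err, (n_ens, n_cells))
    ensemble = ensemble + (perturbed - predicted) @ K.T

estimate = ensemble.mean(0)  # reconstructed bed topography
```

With a real nonlinear solver in place of `forward_model`, the same update structure applies; that is the appeal of the ensemble approach noted in the abstract, since the model never has to be rewritten in explicit form.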
Numerical results are presented for a dam-break flow problem over a non-flat bed using different solvers for the shallow water equations. The robustness of the proposed method is investigated using different numbers of loops, sensitivity parameters, initial samples and locations of observations. The obtained results demonstrate the high reliability and accuracy of the proposed techniques.
Keywords: erodible beds, finite element method, finite volume method, nonlinear elasticity, shallow water equations, stresses in soil
Procedia PDF Downloads 130
204 Referencing Anna: Findings From Eye-tracking During Dutch Pronoun Resolution
Authors: Robin Devillers, Chantal van Dijk
Abstract:
Children face ambiguities in everyday language use. Ambiguity in pronoun resolution can be particularly challenging for them, whereas adults can rapidly identify the antecedent of a pronoun. Two main factors underlie this process, namely the accessibility of the referent and the syntactic cues of the pronoun. Within 200 ms, adults have integrated accessibility and syntactic constraints, relieving cognitive effort by considering contextual cues. As children are still developing their cognitive capacity, they are not yet able to simultaneously assess and integrate accessibility, contextual cues and syntactic information. As such, they may fail to identify the correct referent and fixate more on the competitor than adults do. In this study, Dutch while-clauses were used to investigate the interpretation of pronouns by children. The aim is to a) examine the extent to which 7- to 10-year-old children are able to utilise discourse and syntactic information during online and offline sentence processing and b) analyse the contribution of individual factors, including age, working memory, condition and vocabulary. Adult and child participants are presented with filler items and while-clauses, the latter following a particular structure: ‘Anna and Sophie are sitting in the library. While Anna is reading a book, she is taking a sip of water.’ This sentence illustrates the ambiguous situation, as it is unclear whether ‘she’ refers to Anna or Sophie. In the unambiguous situation, either Anna or Sophie is replaced by a boy, such as ‘Peter’, so that the pronoun in the second sentence unambiguously refers to one of the characters due to its syntactic constraints. Children’s and adults’ responses were measured by means of a visual world paradigm. This paradigm consisted of two characters, of which one was the referent (the target) and the other the competitor. 
A sentence was presented and followed by a question that required the participant to choose which character was the referent. This paradigm thus yields an online (fixations) and an offline (accuracy) score. The findings will be analysed using Generalised Additive Mixed Models, which allow for a thorough estimation of the individual variables. These findings will contribute to the scientific literature in several ways: firstly, while-clauses have not been studied much, and their processing has not yet been characterised. Moreover, online pronoun resolution has not been investigated much in either children or adults, so this study will contribute to the literature on pronoun resolution in both groups. Lastly, pronoun resolution has not yet been studied in Dutch, and as such, this study adds to the range of languages investigated.
Keywords: pronouns, online language processing, Dutch, eye-tracking, first language acquisition, language development
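As a simplified stand-in for the Generalised Additive Mixed Model analysis of the offline accuracy score, the sketch below fits a plain fixed-effects logistic model on simulated trial data. A real GAMM would add smooth terms and by-participant random effects; every variable name, effect size and sample size here is invented for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 400

# Hypothetical trial-level data: was the correct referent chosen?
df = pd.DataFrame({
    "age": rng.uniform(7, 10, n),          # child's age in years
    "vocab": rng.normal(50, 10, n),        # vocabulary test score
    "ambiguous": rng.integers(0, 2, n),    # 1 = ambiguous while-clause
})
logit = -2 + 0.35 * df["age"] + 0.02 * df["vocab"] - 0.8 * df["ambiguous"]
df["correct"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Fixed-effects logistic regression on accuracy
model = smf.logit("correct ~ age + vocab + ambiguous", data=df).fit(disp=0)
print(model.params)
```

The fitted coefficients indicate how age, vocabulary and condition shift the log-odds of choosing the target referent; the GAMM analysis generalises this by letting those effects be non-linear and by modelling participant-level variation.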
Procedia PDF Downloads 100
203 Fracture Toughness Characterizations of Single Edge Notch (SENB) Testing Using DIC System
Authors: Amr Mohamadien, Ali Imanpour, Sylvester Agbo, Nader Yoosef-Ghodsi, Samer Adeeb
Abstract:
The fracture toughness resistance curve (e.g., the J-R curve and the crack tip opening displacement (CTOD) or δ-R curve) is important in facilitating strain-based design and integrity assessment of oil and gas pipelines. This paper presents laboratory experimental data to characterize the fracture behavior of pipeline steel. The influential parameters associated with the fracture of API 5L X52 pipeline steel, including different initial crack sizes, were experimentally investigated using single edge notch bend (SENB) specimens. A total of 9 small-scale specimens with different crack-length-to-specimen-depth ratios were fabricated and tested in single edge notch bending. ASTM E1820 and BS 7448 provide testing procedures for constructing the fracture resistance curve (load-CTOD, CTOD-R, or J-R) from test results. However, these procedures are limited by standard specimen dimensions, displacement gauges, and calibration curves. To overcome these limitations, this paper presents the use of small-scale specimens and a 3D digital image correlation (DIC) system to extract the parameters required for fracture toughness estimation. Fracture resistance curve parameters in terms of crack mouth opening displacement (CMOD), crack tip opening displacement (CTOD), and crack growth length (∆a) were obtained from the test results by utilizing the DIC system, and an improved regression-fitted resistance function (CTOD vs. crack growth, or J-integral vs. crack growth) that depends on a variety of initial crack sizes was constructed and presented. The obtained results were compared to available results from classical physical measurement techniques, and acceptable agreement was observed. Moreover, a case study was implemented to estimate the maximum strain value that initiates stable crack growth. This might be of interest for developing more accurate strain-based damage models. 
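A resistance curve of the kind described, CTOD versus crack growth, is commonly regression-fitted with a power law of the form δ = C1(Δa)^C2 (the functional form used in ASTM E1820-style fits). The sketch below shows such a fit on made-up (Δa, CTOD) pairs, not the paper's DIC measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical (Δa, CTOD) pairs of the kind extracted from DIC images
delta_a = np.array([0.1, 0.2, 0.4, 0.6, 0.9, 1.2, 1.6])        # mm
ctod    = np.array([0.05, 0.08, 0.12, 0.15, 0.19, 0.22, 0.26])  # mm

# Power-law resistance function: δ = C1 * (Δa)**C2
def power_law(da, c1, c2):
    return c1 * da**c2

(c1, c2), _ = curve_fit(power_law, delta_a, ctod, p0=(0.1, 0.5))
fitted = power_law(delta_a, c1, c2)
print(f"CTOD-R fit: delta = {c1:.3f} * (da)^{c2:.3f}")
```

The same two-parameter fit applies to a J-R curve by replacing CTOD with J-integral values; the fitted curve then supports the crack-initiation strain estimate mentioned in the case study.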
The results of laboratory testing in this study offer a valuable database to develop and validate damage models that are able to predict crack propagation in pipeline steel, accounting for the influential parameters associated with fracture toughness.
Keywords: fracture toughness, crack propagation in pipeline steels, CTOD-R, strain-based damage model
Procedia PDF Downloads 63
202 Childhood Sensory Sensitivity: A Potential Precursor to Borderline Personality Disorder
Authors: Valerie Porr, Sydney A. DeCaro
Abstract:
TARA for borderline personality disorder (BPD), an education and advocacy organization, helps families to deal compassionately and effectively with troubling BPD behaviors. Our psychoeducational programs focus on understanding the underlying neurobiological features of BPD and on evidence-based methodology integrating dialectical behavior therapy (DBT) and mentalization-based therapy (MBT), clarifying the inherent misunderstanding of BPD behaviors and improving family communication. TARA4BPD conducts online surveys, workshops, and topical webinars. For over 25 years, we have collected data from BPD helpline callers. These data drew our attention to particular childhood idiosyncrasies that seem to characterize many of the children who later met the criteria for BPD. The idiosyncrasies we observed, heightened sensory sensitivity and hypervigilance, were included in Adolph Stern’s 1938 definition of “borderline.” This aspect of BPD has not been prioritized by personality disorder researchers, who are presently focused on emotion processing and social cognition in BPD. Parents described sleep reversal problems in infants who, early on, seem to exhibit dysregulation of circadian rhythm. Families describe children as supersensitive to sensory sensations, such as specific sounds, a heightened sense of smell and taste, the textures of foods, and an inability to tolerate various fabric textures (e.g., seams in socks). They also exhibit high sensitivity to particular words and voice tones. Many have alexithymia and dyslexia. These children are either hypo- or hypersensitive to sensory sensations, including pain. Many suffer from fibromyalgia. BPD reactions to pain have been studied (C. Schmahl) and confirm the existence of hyper- and hypo-reactions to pain stimuli in people with BPD. To date, there is little or no data regarding what comprises a normative range of sensitivity in infants and children. 
Many parents reported that their children were tested or treated for sensory processing disorder (SPD), learning disorders, and ADHD. SPD is not included in the DSM and is treated by occupational therapists. The overwhelming anecdotal data from thousands of parents of children who later met criteria for BPD led TARA4BPD to develop a sensitivity survey to gather evidence on the possible role of early sensory perception problems as a precursor to BPD, hopefully initiating new directions in BPD research. At present, the research community seems unaware of the role supersensory sensitivity might play as an early indicator of BPD. Parents' observations of childhood sensitivity obtained through family interviews, and the results of an extensive online survey on sensory responses across various ages of development, will be presented. People with BPD suffer from a sense of isolation and otherness that often results in later interpersonal difficulties. Early identification of supersensitive children, while brain circuits are developing, might decrease the development of social interaction deficits such as rejection sensitivity, self-referential processes, and negative bias, hallmarks of BPD, ultimately minimizing the maladaptive methods of coping with distress that characterize BPD. Family experiences are an untapped resource for BPD research. It is hoped that this data will give family observations the critical credibility to inform future treatment and research directions.
Keywords: alexithymia, dyslexia, hypersensitivity, sensory processing disorder
Procedia PDF Downloads 201
201 Factors Affecting Treatment Resilience in Patients with Oesophago-Gastric Cancers Undergoing Palliative Chemotherapy: A Literature Review
Authors: Kiran Datta, Daniella Holland-Hart, Anthony Byrne
Abstract:
Introduction: Oesophago-gastric (OG) cancers are the fifth most common in the UK, accounting for over 12,000 deaths each year. Most patients present at later stages of the disease, with only 21% of patients with stage 4 disease surviving longer than a year. As a result, many patients are unsuitable for curative surgery and instead receive palliative treatment to improve prognosis and symptom burden. However, palliative chemotherapy can result in significant toxicity: almost half of patients are unable to complete their chemotherapy regimen, with this proportion rising significantly in older and frailer patients. In addition, clinical trials often exclude older and frailer patients due to strict inclusion criteria, meaning there is limited evidence to guide which patients are most likely to benefit from palliative chemotherapy. Inappropriate chemotherapy administration is at odds with the goals of palliative treatment and care, which are to improve quality of life, and it also represents a significant resource expenditure. This literature review aimed to examine and appraise evidence regarding treatment resilience in order to guide clinicians in identifying the most suitable candidates for palliative chemotherapy. Factors influencing treatment resilience were assessed, as measured by completion rates, dose reductions, and toxicities. Methods: This literature review was conducted using rapid review methodology, utilising modified systematic methods. A literature search was performed across the MEDLINE, EMBASE, and Cochrane Library databases, with results limited to papers from the last 15 years available in English. Key inclusion criteria were: 1) participants with oesophageal, gastro-oesophageal junction, or gastric cancers; 2) patients treated with palliative chemotherapy; 3) available data evaluating the association between baseline participant characteristics and treatment resilience. 
Results: Of the 2326 papers returned, 11 reports of 10 studies were included in this review after excluding duplicates and irrelevant papers. Treatment resilience factors that were assessed included age, performance status, frailty, inflammatory markers, and sarcopenia. Age was generally a poor predictor of how well patients would tolerate chemotherapy, while poor performance status was a better indicator of the need for dose reduction and of treatment non-completion. Frailty was assessed across one cohort using multiple screening tools and was an effective marker of the risk of toxicity and the requirement for dose reduction. Inflammatory markers included lymphopenia and the Glasgow Prognostic Score, which assesses inflammation and hypoalbuminaemia. Although quick to obtain and interpret, these findings appeared less reliable due to the inclusion of patients treated with palliative radiotherapy. Sarcopenia and body composition were often associated with chemotherapy toxicity but not with the rate of regimen completion. Conclusion: This review demonstrates that there are numerous measures that can estimate the ability of patients with oesophago-gastric cancer to tolerate palliative chemotherapy, and these should be incorporated into clinical assessments to promote personalised decision-making around treatment. Age should not be a barrier to receiving chemotherapy, and older and frailer patients should be included in future clinical trials to better represent typical patients with oesophago-gastric cancers. Decisions regarding palliative treatment should be guided by the factors identified here as well as by patient preference.
Keywords: frailty, oesophago-gastric cancer, palliative chemotherapy, treatment resilience
Procedia PDF Downloads 76
200 Sustainability Impact Assessment of Construction Ecology to Engineering Systems and Climate Change
Authors: Moustafa Osman Mohammed
Abstract:
The construction industry, as one of the main contributors to the depletion of natural resources, influences climate change. This paper discusses the incremental and evolutionary development of proposed models for optimizing life-cycle analysis into an explicit strategy for evaluation systems. The main categories inevitably introduce uncertainties; the approach adopts a composite structure model (CSM) as an environmental management system (EMS) in the practical evaluation of small and medium-sized enterprises (SMEs). The model simplifies complex systems to reflect natural systems' inputs, outputs and outcomes, influences the 'framework measures', and gives a maximum likelihood estimation of how elements are simulated over the composite structure. Traditional modeling knowledge is based on physical dynamic and static patterns regarding the parameters that influence the environment. It unifies methods to demonstrate how construction systems ecology is interrelated from a management perspective, and reflects the effects of engineering systems on ecology as ultimately unified technologies in an extensive range beyond construction impacts, such as energy systems. Sustainability broadens socioeconomic parameters to a practical science that meets recovery performance, while engineering reflects the generic control of protective systems. When the environmental model is employed properly, the management decision process in governments or corporations can address policy for the accomplishment of strategic plans precisely. The management and engineering limitation focuses on autocatalytic control as a closed cellular system to naturally balance anthropogenic insertions or aggregations of structure systems toward equilibrium as steady, stable conditions. Thereby, construction systems ecology incorporates engineering and management schemes as a midpoint stage between biotic and abiotic components to predict construction impacts. 
The latter outcomes' theory of environmental obligation suggests either a procedure or a technique that is achieved in the sustainability impact of construction system ecology (SICSE), ultimately as a relative mitigation measure of deviation control.
Keywords: sustainability, environmental impact assessment, environmental management, construction ecology
Procedia PDF Downloads 393
199 A Double Ended AC Series Arc Fault Location Algorithm Based on Currents Estimation and a Fault Map Trace Generation
Authors: Edwin Calderon-Mendoza, Patrick Schweitzer, Serge Weber
Abstract:
Series arc faults appear frequently and unpredictably in low-voltage distribution systems. Many methods have been developed to detect this type of fault, and commercial protection devices such as the AFCI (arc fault circuit interrupter) have been used successfully in electrical networks to prevent damage and catastrophic incidents such as fires. However, these devices do not allow series arc faults to be located on the line while it is in operation. This paper presents a location algorithm for series arc faults in a low-voltage indoor power line in an AC 230 V / 50 Hz home network. The method is validated through simulations using MATLAB software. The fault location method uses the electrical parameters (resistance, inductance, capacitance, and conductance) of a 49 m indoor power line. The mathematical model of a series arc fault is based on the analysis of the V-I characteristics of the arc and consists basically of two antiparallel diodes and DC voltage sources. In a first step, the arc fault model is inserted at several different positions along the line, which is modeled using lumped parameters. At both ends of the line, currents and voltages are recorded for each arc fault generated at a different distance. In a second step, a fault map trace is created using signature coefficients obtained from Kirchhoff equations, which allow a virtual decoupling of the line’s mutual capacitance. Each signature coefficient, obtained from the subtraction of estimated currents, is calculated taking into account the Discrete Fast Fourier Transform of currents and voltages as well as the fault distance value. These parameters are then substituted into the Kirchhoff equations. In a third step, the same procedure for calculating signature coefficients is employed, but this time considering hypothetical fault distances at which the fault could appear; in this step, the fault distance is unknown. 
The iterative calculation from the Kirchhoff equations, considering stepped variations of the fault distance, yields a curve with a linear trend. Finally, the fault distance is estimated at the intersection of the two curves obtained in steps 2 and 3. The series arc fault model is validated by comparing the simulated current with real recorded currents. The model of the complete circuit is obtained for a 49 m line with a resistive load, and 11 different arc fault positions are considered for the map trace generation. The complete simulation, the performance of the method and the perspectives for future work will be presented.
Keywords: indoor power line, fault location, fault map trace, series arc fault
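The final location step, intersecting the coefficient level derived from the measured data with the trace computed over stepped hypothetical fault distances, can be sketched as follows. The linear trend and its coefficients are invented placeholders, not the paper's identified parameters:

```python
import numpy as np

# Hypothetical signature-coefficient trace over stepped assumed fault
# distances (step 3), and the reference level from recorded currents (step 2)
distances = np.linspace(0.0, 49.0, 99)   # m, 0.5 m steps along the line
trace_hypo = 0.012 * distances + 0.05    # linear trend from the iteration
level_meas = 0.012 * 31.0 + 0.05         # coefficient from measured data

# Fault location: where the hypothetical trace meets the measured level
idx = np.argmin(np.abs(trace_hypo - level_meas))
fault_distance = distances[idx]
print(f"Estimated fault distance: {fault_distance:.1f} m")
```

In the paper's setting, both curves come from substituting the DFT-based current estimates into the Kirchhoff equations; the sketch only shows the intersection logic on a synthetic linear trend.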
Procedia PDF Downloads 137
198 Intersubjectivity of Forensic Handwriting Analysis
Authors: Marta Nawrocka
Abstract:
In all legal proceedings in which expert evidence is presented, a major concern is the assessment of the evidential value of expert reports. Judicial institutions, when making decisions, rely heavily on expert reports, because they usually do not possess the 'special knowledge' from certain fields of science that would allow them to verify the results presented. In handwriting studies, standards of analysis have been developed. They unify the procedures used by experts in comparing signs and in constructing expert reports. However, the methods used by experts are usually of a qualitative nature; they rely on the application of the expert's knowledge and experience and, in effect, leave a significant margin of discretion in the assessment. Moreover, the standards used by experts are still not very precise, and the process of reaching conclusions is poorly understood. The above-mentioned circumstances indicate that expert opinions in the field of handwriting analysis may, for many reasons, not be sufficiently reliable. It is assumed that this state of affairs has its source in a very low level of intersubjectivity of the measuring scales and analysis procedures that constitute this kind of analysis. Intersubjectivity is a feature of cognition which (in relation to methods) indicates the degree of consistency of the results that different people obtain using the same method. The higher the level of intersubjectivity, the more reliable and credible the method can be considered. The aim of the conducted research was to determine the degree of intersubjectivity of the methods used by experts in handwriting analysis. 30 experts took part in the study, and each of them received two signatures, with varying degrees of readability, for analysis. 
Their task was to distinguish graphic characteristics in the signature, estimate the evidential value of the found characteristics, and estimate the evidential value of the signature as a whole. The obtained results were compared with each other using Krippendorff's alpha statistic, which numerically determines the degree of agreement among the results (assessments) that different people obtain under the same conditions using the same method. Estimating the degree of agreement among the experts' results for each of these tasks made it possible to determine the degree of intersubjectivity of the studied method. The study showed that, during the analysis, the experts identified different signature characteristics and attributed different evidential values to them. In this respect, intersubjectivity turned out to be low. In addition, it turned out that experts named and described the same characteristics in various ways, and the language used was often inconsistent and imprecise. Thus, significant differences were noted in the language and nomenclature applied. On the other hand, experts attributed a similar evidential value to the entire signature (set of characteristics), which indicates that in this respect they were relatively consistent.
Keywords: forensic sciences experts, handwriting analysis, inter-rater reliability, reliability of methods
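For nominal codes with an equal number of raters per item and no missing values, Krippendorff's alpha reduces to one minus the ratio of observed pairwise disagreement (within items) to expected pairwise disagreement (across all pooled values). The sketch below implements this simple case on invented ratings; it is not the authors' dataset or software:

```python
import numpy as np
from itertools import combinations

def krippendorff_alpha_nominal(ratings):
    """Krippendorff's alpha for nominal data, complete ratings only.

    ratings: raters x items matrix (every rater scores every item).
    """
    ratings = np.asarray(ratings)
    # Observed disagreement: fraction of disagreeing rater pairs per item
    pairs = disagreements = 0
    for item in ratings.T:
        for a, b in combinations(item, 2):
            disagreements += int(a != b)
            pairs += 1
    d_obs = disagreements / pairs
    # Expected disagreement: disagreeing pairs across all pooled values
    pooled = ratings.ravel()
    exp_dis = sum(int(a != b) for a, b in combinations(pooled, 2))
    d_exp = exp_dis / (len(pooled) * (len(pooled) - 1) / 2)
    return 1 - d_obs / d_exp

# Illustrative data: 3 experts coding 6 signature characteristics (codes 1/2)
ratings = [[1, 2, 1, 1, 2, 2],
           [1, 2, 1, 2, 2, 2],
           [1, 2, 2, 1, 2, 2]]
alpha = krippendorff_alpha_nominal(ratings)
print(f"alpha = {alpha:.3f}")
```

Alpha ranges from 1 (perfect agreement) through 0 (agreement at chance level) to negative values (systematic disagreement); Krippendorff's general formulation also handles missing data and ordinal or interval metrics, which this minimal version omits.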
Procedia PDF Downloads 149