Search results for: Neural Processing Element (NPE)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7894


1264 Modeling Engagement with Multimodal Multisensor Data: The Continuous Performance Test as an Objective Tool to Track Flow

Authors: Mohammad H. Taheri, David J. Brown, Nasser Sherkat

Abstract:

Engagement is one of the most important factors in determining successful outcomes and deep learning in students. Existing approaches to detecting student engagement involve periodic human observations that are subject to inter-rater reliability problems. Our solution uses real-time multimodal multisensor data labeled by objective performance outcomes to infer the engagement of students. The study involves four students with a combined diagnosis of cerebral palsy and a learning disability who took part in a 3-month trial over 59 sessions. Multimodal multisensor data were collected while they participated in a continuous performance test. Eye gaze, electroencephalogram, body pose, and interaction data were used to create a model of student engagement through objective labeling from the continuous performance test outcomes. In order to achieve this, a type of continuous performance test is introduced, the Seek-X type. Nine features were extracted, including high-level handpicked compound features. Using leave-one-out cross-validation, a series of different machine learning approaches were evaluated. Overall, the random forest classification approach achieved the best classification results: 93.3% classification accuracy for engagement and 42.9% accuracy for disengagement. We compared these results to outcomes from different models: AdaBoost, decision tree, k-Nearest Neighbor, naïve Bayes, neural network, and support vector machine. We showed that using a multisensor approach achieved higher accuracy than using features from any reduced set of sensors. We found that using high-level handpicked features can improve the classification accuracy in every sensor mode. Our approach is robust to both sensor fallout and occlusions. The single most important sensor feature for the classification of engagement and distraction was shown to be eye gaze. We have shown that we can accurately predict the level of engagement of students with learning disabilities in real time, in a way that does not depend on human observation, is not subject to inter-rater reliability issues, and does not rely on a single mode of sensor input. This will help teachers design interventions for a heterogeneous group of students, where teachers cannot possibly attend to each of their individual needs. Our approach can be used to identify those with the greatest learning challenges so that all students are supported to reach their full potential.
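A minimal sketch of the evaluation strategy described in this abstract: leave-one-out cross-validation of a random forest classifier over a 59-session, nine-feature dataset. The feature matrix below is random placeholder data, not the study's multimodal recordings.

```python
# Leave-one-out cross-validation of a random forest engagement classifier.
# Placeholder data stands in for the nine multimodal features (eye gaze,
# EEG, body pose, interaction, and handpicked compound features).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(59, 9))      # 59 sessions x 9 extracted features
y = rng.integers(0, 2, size=59)   # 1 = engaged, 0 = disengaged (CPT-labelled)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"LOOCV accuracy: {scores.mean():.3f}")

# Per-feature importances, e.g. to confirm the dominance of eye gaze.
clf.fit(X, y)
print(clf.feature_importances_)
```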

Keywords: affective computing in education, affect detection, continuous performance test, engagement, flow, HCI, interaction, learning disabilities, machine learning, multimodal, multisensor, physiological sensors, student engagement

Procedia PDF Downloads 77
1263 The Creation of Calcium Phosphate Coating on Nitinol Substrate

Authors: Kirill M. Dubovikov, Ekaterina S. Marchenko, Gulsharat A. Baigonakova

Abstract:

NiTi alloys are widely used as implants in medicine due to their unique properties such as superelasticity, the shape memory effect and biocompatibility. However, despite these properties, one of the major problems is the release of nickel after prolonged use in the human body under dynamic stress. This occurs due to oxidation and cracking of NiTi implants, which provokes nickel segregation from the matrix to the surface and release into living tissues. Nickel is a toxic element and can cause cancer, allergies, etc. One of the most popular ways to solve this problem is to create a corrosion-resistant coating on NiTi. There are many coatings of this type, but not all of them have good biocompatibility, which is very important for medical implants. Coatings based on calcium phosphate phases have excellent biocompatibility because Ca and P are the main constituents of the mineral part of human bone. This fact suggests that a Ca-P coating on NiTi can enhance osteogenesis and accelerate the healing process. Therefore, the aim of this study is to investigate the structure of a Ca-P coating on a NiTi substrate. Plasma-assisted radio frequency (RF) sputtering was used to obtain this film. This method was chosen because it allows the crystallinity and morphology of the Ca-P coating to be controlled by the sputtering parameters. It allowed us to obtain three different NiTi samples with Ca-P coatings. XRD, AFM, SEM and EDS were used to study the composition, structure and morphology of the coating phase. Scratch tests were carried out to evaluate the adhesion of the coating to the substrate. Wettability tests were used to investigate the hydrophilicity of the different coatings and to suggest which of them had better biocompatibility. XRD showed that the coatings of all samples were hydroxyapatite, while the matrix was represented by TiNi intermetallic compounds such as B2, Ti2Ni and Ni3Ti. SEM showed that only one sample, sputtered for three hours, had a dense and defect-free coating. Wettability tests showed that the sample with the densest coating had the lowest contact angle of 40.2° and the largest free surface energy of 57.17 mJ/m², which is mostly dispersive. A scratch test was carried out to investigate the adhesion of the coating to the surface, and it showed that all coatings were removed by a cohesive mechanism. However, at a load of 30 N, the indenter reached the substrate in two out of three samples; the exception was the sample with the densest coating. It was concluded that the most promising sputtering mode was the third, consisting of three hours of deposition. This mode produced a defect-free Ca-P coating with good wettability and adhesion.

Keywords: biocompatibility, calcium phosphate coating, NiTi alloy, radio frequency sputtering

Procedia PDF Downloads 59
1262 Status of Participative Governance Practices in Higher Education: Implications for Stakeholders' Transformative Role-Assumption

Authors: Endalew Fufa Kufi

Abstract:

The research investigated the role of stakeholders such as students, teachers and administrators in the practice of good governance in higher education by looking into the specific contributions of top officials, teachers and students to ensuring workable ties and productive interchanges at Adama Science and Technology University. Attention was given to participation, fairness and exemplariness as key indicators of good governance. The target university was chosen because the researcher's familiarity with it allowed dependable data, access to respondents and manageable data processing. A descriptive survey design was used to describe the roles of the stakeholders in university governance and to reflect on the nature of their participation in the practices. The research centred on the administration, where supportive groups such as central administrators and supporting service-givers took part, and on academia, where teachers and students were the targets. In total, 60 teachers, 40 students and 15 administrative officers served as respondents. Data were collected in the form of self-reports through open-ended questionnaires. The findings indicated that, while vertical interchanges in academic and administrative routines flowed normally on a top-down basis, planned participation of stakeholders in decision-making and systematic communication of roles and changes in decisions with top officials were not efficiently practiced. Moreover, good modelling practices were not observed to the fullest extent. Rather, a very wide gap was observed between the academic and administrative staff, as was also the case between teachers and students. The implication is that, owing to the shortage of a participative atmosphere and the waning of fairness in governance, routine practices have persisted as the vicious circles of governance.

Keywords: governance, participative, stakeholders, transformative, role-assumption

Procedia PDF Downloads 383
1261 Fast Switching Mechanism for Multicasting Failure in OpenFlow Networks

Authors: Alaa Allakany, Koji Okamura

Abstract:

Multicast technology is an efficient and scalable technology for data distribution that optimizes network resources. However, in IP networks the responsibility for managing multicast groups is distributed among network routers, which causes limitations such as delays in processing group events, high bandwidth consumption and redundant tree calculation. Software Defined Networking (SDN), represented by OpenFlow, has been presented as a solution to many of these problems: in SDN the control plane and data plane are separated by shifting control and management to a remote centralized controller, and the routers are used as forwarders only. In this paper, we propose a fast switching mechanism for recovering from link failures in a multicast tree, based on the Tabu Search heuristic algorithm and on modified OpenFlow switch functions that switch quickly to the backup subtree rather than reporting to the controller. In this work we implement a multicasting OpenFlow controller; this centralized controller is a core part of our multicasting approach and is responsible for (1) constructing the multicast tree and (2) handling multicast group events and maintaining multicast state. Finally, the OpenFlow switch functions are modified for fast switching to backup paths. Forwarders forward multicast packets based on multicast routing entries generated by the centralized controller. Tabu Search is used as the heuristic algorithm to construct a near-optimum multicast tree and to keep the tree near optimum when members join or leave the multicast group (group events), as sketched below.
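The following is an illustrative Tabu Search sketch for building and maintaining a low-cost multicast tree, in the spirit of the approach above; the neighbourhood move (drop one tree edge and rebuild from shortest paths), the tabu-list length and all weights are assumptions, and the OpenFlow integration is not shown.

```python
# Tabu Search over multicast trees: start from a union of shortest paths,
# then iteratively drop a tree edge and rebuild, keeping recently used
# moves on a short-term tabu list.
import networkx as nx

def tree_cost(G, edges):
    return sum(G[u][v]["weight"] for u, v in edges)

def steiner_edges(G, source, members):
    # Union of shortest paths from the source to every group member.
    edges = set()
    for m in members:
        path = nx.shortest_path(G, source, m, weight="weight")
        edges |= set(zip(path, path[1:]))
    return edges

def tabu_multicast(G, source, members, iters=50, tabu_len=5):
    best = current = steiner_edges(G, source, members)
    tabu = []
    for _ in range(iters):
        candidates = []
        for u, v in list(current):
            if (u, v) in tabu:
                continue
            H = G.copy()            # neighbourhood move: drop one tree edge
            H.remove_edge(u, v)     # and rebuild the tree around it
            try:
                candidates.append(((u, v), steiner_edges(H, source, members)))
            except nx.NetworkXNoPath:
                continue
        if not candidates:
            break
        move, current = min(candidates, key=lambda c: tree_cost(G, c[1]))
        tabu = (tabu + [move])[-tabu_len:]      # short-term memory
        if tree_cost(G, current) < tree_cost(G, best):
            best = current
    return best

G = nx.connected_watts_strogatz_graph(12, 4, 0.3, seed=3)
for u, v in G.edges:
    G[u][v]["weight"] = 1 + (u * v) % 4
tree = tabu_multicast(G, source=0, members=[5, 7, 11])
print(sorted(tree), "cost:", tree_cost(G, tree))
```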

Keywords: multicast tree, software defined networks, tabu search, OpenFlow

Procedia PDF Downloads 247
1260 A Comparative Study of Sampling-Based Uncertainty Propagation with First Order Error Analysis and Percentile-Based Optimization

Authors: M. Gulam Kibria, Shourav Ahmed, Kais Zaman

Abstract:

In system analysis, uncertainty in the input variables causes uncertainty in the system responses. Different probabilistic approaches for uncertainty representation and propagation in such cases exist in the literature, and different representation approaches result in different outputs; some approaches may estimate the system response better than others. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge (MUQC) has posed challenges about uncertainty quantification. Subproblem A of the challenge, the uncertainty characterization subproblem, is addressed in this study. In this subproblem, the challenge is to gather knowledge about unknown model inputs, which have inherent aleatory and epistemic uncertainties, from the responses (output) of the given computational model. We use two different methodologies to approach the problem. In the first methodology, we use sampling-based uncertainty propagation with first-order error analysis. In the other approach, we place emphasis on the use of Percentile-Based Optimization (PBO). The NASA Langley MUQC's subproblem A is constructed in such a way that both aleatory and epistemic uncertainties need to be managed. The challenge problem classifies each uncertain parameter as belonging to one of the following three types: (i) an aleatory uncertainty modeled as a random variable with a fixed functional form and known coefficients; this uncertainty cannot be reduced. (ii) An epistemic uncertainty modeled as a fixed but poorly known physical quantity that lies within a given interval; this uncertainty is reducible. (iii) A parameter might be aleatory, but sufficient data might not be available to adequately model it as a single random variable. For example, the parameters of a normal variable, e.g., the mean and standard deviation, might not be precisely known but could be assumed to lie within some intervals. This results in a distributional p-box: the physical parameter carries an aleatory uncertainty, but the parameters prescribing its mathematical model are subject to epistemic uncertainty, each being an unknown element of a known interval; this uncertainty is reducible. From the study, it is observed that, due to practical limitations or computational expense, the sampling in the sampling-based methodology is not exhaustive. That is why the sampling-based methodology has a high probability of underestimating the output bounds. Therefore, an optimization-based strategy to convert uncertainty described by interval data into a probabilistic framework is necessary; this is achieved in this study by using PBO.
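A hedged sketch of the sampling-based propagation for a type (iii) parameter: an outer loop samples the epistemic interval parameters, an inner loop propagates the aleatory variability, and the envelope of the inner-loop results bounds the response (a p-box). The model function and intervals are placeholders, not the NASA Langley challenge model.

```python
# Double-loop (nested) Monte Carlo: epistemic outer loop, aleatory inner loop.
import numpy as np

rng = np.random.default_rng(1)

def model(x1, x2):
    return x1 ** 2 + np.sin(x2)      # placeholder system response

n_outer, n_inner = 200, 1000
lo_responses, hi_responses = [], []
for _ in range(n_outer):
    # Type (iii): aleatory normal with epistemically uncertain mean and
    # standard deviation, each an unknown element of an assumed interval.
    mu = rng.uniform(-1.0, 1.0)
    sigma = rng.uniform(0.5, 1.5)
    x1 = rng.normal(mu, sigma, n_inner)   # aleatory sampling
    x2 = rng.uniform(0.0, 2.0)            # type (ii) epistemic interval
    y = model(x1, x2)
    lo_responses.append(y.min())
    hi_responses.append(y.max())

# If the outer sampling is not exhaustive, these bounds tend to be too tight,
# which is the underestimation risk noted in the abstract.
print("estimated response bounds:", min(lo_responses), max(hi_responses))
```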

Keywords: aleatory uncertainty, epistemic uncertainty, first order error analysis, uncertainty quantification, percentile-based optimization

Procedia PDF Downloads 224
1259 Beneficiation of Pulp and Paper Mill Sludge for the Generation of Single Cell Protein for Fish Farming

Authors: Lucretia Ramnath

Abstract:

Fishmeal is extensively used for fish farming but is an expensive fish feed ingredient. A cheaper alternative to fishmeal is single cell protein (SCP), which can be cultivated on fermentable sugars recovered from organic waste streams such as pulp and paper mill sludge (PPMS). PPMS has a high cellulose content and is thus suitable for glucose recovery through enzymatic hydrolysis, although this is hampered by lignin and ash. To render PPMS amenable to enzymatic hydrolysis, the PPMS was pre-treated to produce a glucose-rich hydrolysate, which served as a feedstock for the production of fungal SCP. The PPMS used in this study had the following composition: 72.77% carbohydrates, 8.6% lignin, and 18.63% ash. The pre-treatments had no significant effect on lignin composition but had a substantial effect on carbohydrate and ash content. Enzymatic hydrolysis of screened PPMS was previously optimized through response surface methodology (RSM) and a 2-factorial design. The optimized protocol resulted in a hydrolysate containing 46.1 g/L of glucose, of which 86% was recovered after downstream processing by passing through a 100-mesh sieve (38 µm pore size). Vogel's medium supplemented with 10 g/L hydrolysate successfully supported the growth of Fusarium venenatum under standard growth conditions: pH 6, 200 rpm, 2.88 g/L ammonium phosphate, 25 °C. A maximum F. venenatum biomass of 45 g/L was produced with a yield coefficient of 4.67. Pulp and paper mill sludge hydrolysate contained approximately five times more glucose than was needed for SCP production and served as a suitable carbon source. We have shown that PPMS can be successfully beneficiated for SCP production.

Keywords: pulp and paper waste, fungi, single cell protein, hydrolysate

Procedia PDF Downloads 186
1258 Automated Fact-Checking by Incorporating Contextual Knowledge and Multi-Faceted Search

Authors: Wenbo Wang, Yi-Fang Brook Wu

Abstract:

The spread of misinformation and disinformation has become a major concern, particularly with the rise of social media as a primary source of information for many people. As a means to address this phenomenon, automated fact-checking has emerged as a safeguard against the spread of misinformation and disinformation. Existing fact-checking approaches aim to determine whether a news claim is true or false, and they have achieved decent veracity prediction accuracy. However, the state-of-the-art methods rely on manually verified external information to assist the checking model in making judgments, which requires significant human resources. This study introduces a framework, SAC, which focuses on 1) augmenting the representation of a claim by incorporating additional context using general-purpose, comprehensive, and authoritative data; 2) developing a search function to automatically select relevant, new, and credible references; 3) focusing on the parts of the representations of a claim and its reference that are most relevant to the fact-checking task. The experimental results demonstrate that 1) augmenting the representations of claims and references through the use of a knowledge base, combined with the multi-head attention technique, contributes to improved fact-checking performance; 2) SAC with auto-selected references outperforms existing fact-checking approaches with manually selected references. Future directions of this study include I) exploring knowledge graphs in Wikidata to dynamically augment the representations of claims and references without introducing too much noise, and II) exploring semantic relations in claims and references to further enhance fact-checking.
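An illustrative sketch (not the authors' SAC code) of the multi-head attention step mentioned above: claim token representations attend over reference tokens so that the parts most relevant to the fact-checking decision are emphasised. The dimensions, tensor names and the final classifier head are assumptions.

```python
# Claim-over-reference multi-head attention followed by a small verdict head.
import torch
import torch.nn as nn

d_model, n_heads = 256, 8
attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

claim = torch.randn(1, 32, d_model)       # 32 claim tokens (e.g. KB-augmented)
reference = torch.randn(1, 128, d_model)  # 128 tokens of an auto-selected reference

# Query = claim, key/value = reference: each claim token gathers the
# reference evidence most relevant to it.
fused, weights = attn(query=claim, key=reference, value=reference)
verdict_logits = nn.Linear(d_model, 2)(fused.mean(dim=1))  # true / false
print(verdict_logits.shape)  # torch.Size([1, 2])
```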

Keywords: fact checking, claim verification, deep learning, natural language processing

Procedia PDF Downloads 46
1257 Experimental Investigation of the Effect of Glass Granulated Blast Furnace Slag on Pavement Quality Concrete Pavement Made of Recycled Asphalt Pavement Material

Authors: Imran Altaf Wasil, Dinesh Ganvir

Abstract:

Due to a scarcity of virgin aggregates, the use of reclaimed asphalt pavement (RAP) as a substitute for natural aggregates has gained popularity. Although RAP is recycled into asphalt pavement, there is still excess RAP, and its use in concrete pavements has expanded in recent years. According to a survey, 98 percent of India's pavements are flexible. As a result, the maintenance and reconstruction of such pavements generate RAP, which can be reused in concrete pavements as well as in the surface course, base course, and sub-base of flexible pavements. Various studies on the properties of reclaimed asphalt pavement and its optimal requirements for use in concrete have been conducted over the years. In this study, a total of four different mixes were prepared by partially replacing natural aggregates with RAP in different proportions. It was found that as the replacement level of natural aggregates by RAP increased, the mechanical and durability properties were reduced. In order to increase the mechanical strength of the mixes, 40% Glass Granulated Blast Furnace Slag (GGBS) was used, and it was found that with the replacement of cement by 40% GGBS there was an enhancement in the mechanical and durability properties of the RAP-inclusive PQC mixes. The reason behind the improvement in the properties is the processing technique used to remove the contaminant layers present in the coarse RAP aggregates. The replacement of natural aggregate with RAP was done in proportions of 20%, 40% and 60%, along with the partial replacement of cement by 40% GGBS. It was found that all the mixes surpassed the design target value of 40 MPa in compression and 4.5 MPa in flexure, making them much more economical and feasible.

Keywords: reclaimed asphalt pavement, pavement quality concrete, glass granulated blast furnace slag, mechanical and durability properties

Procedia PDF Downloads 99
1256 Effect of Marketing Strategy on the Performance of Small and Medium Enterprises in Nigeria

Authors: Kadiri Kayode Ibrahim, Kadiri Omowunmi

Abstract:

The research study was concerned with an evaluation of the effect of marketing strategy on the performance of SMEs in Abuja. This was achieved, specifically, through the examination of the effect of disaggregated components of marketing strategy (Product, Price, Promotion, Placement and Process) on Sales Volume (as a proxy for performance). The study design was causal in nature, with the use of quantitative methods involving a cross-sectional survey carried out through the administration of a structured questionnaire. A multistage sample of 398 respondents was utilized to provide the primary data used in the study. Subsequently, path analysis was employed in processing the obtained data and testing the formulated hypotheses. Findings from the study indicated that all modeled components of marketing strategy were positive and statistically significant determinants of performance among businesses in the zone. It was, therefore, recommended that SMEs invest in continuous product innovation and development in line with the needs and preferences of the target market, as well as adopt a dynamic pricing strategy that considers both cost factors and market conditions. It is also crucial that businesses in the zone adopt marketing communication measures that stimulate brand awareness and increase engagement, including the use of social media platforms and content marketing. Additionally, owner-managers should ensure that their products are readily available to their target customers through an emphasis on availability and accessibility measures. Furthermore, a commitment to consistent optimization of internal operations is crucial for improved productivity, reduced costs, and enhanced customer satisfaction, which in turn will positively impact their overall performance.

Keywords: product, price, promotion, placement

Procedia PDF Downloads 11
1255 StockTwits Sentiment Analysis on Stock Price Prediction

Authors: Min Chen, Rubi Gupta

Abstract:

Understanding and predicting stock market movements is a challenging problem. It is believed that stock markets are partially driven by public sentiment, which has led to numerous research efforts to predict stock market trends using public sentiment expressed on social media such as Twitter, but with limited success. Recently, the microblogging website StockTwits has become increasingly popular for users to share their discussions of and sentiments about stocks and the financial market. In this project, we analyze the text content of StockTwits tweets and extract financial sentiment using text featurization and machine learning algorithms. StockTwits tweets are first pre-processed using techniques including stopword removal, special character removal, and case normalization to remove noise. Features are extracted from these preprocessed tweets through a text featurization process using bags of words, N-gram models, TF-IDF (term frequency-inverse document frequency), and latent semantic analysis. Machine learning models are then trained to classify the tweets' sentiment as positive (bullish) or negative (bearish). The correlation between the aggregated daily sentiment and daily stock price movement is then investigated using Pearson's correlation coefficient. Finally, the sentiment information is applied together with time series stock data to predict stock price movement. The experiments on five companies (Apple, Amazon, General Electric, Microsoft, and Target) over a period of nine months demonstrate the effectiveness of our study in improving prediction accuracy.
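A minimal sketch of the pipeline: preprocessing, TF-IDF featurization, a bullish/bearish classifier, and a Pearson correlation between aggregated daily sentiment and daily returns. The two-tweet corpus and the daily series are toy placeholders, not the study's data.

```python
# Preprocess -> TF-IDF featurize -> classify -> correlate with price moves.
import re
import numpy as np
from scipy.stats import pearsonr
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

tweets = ["$AAPL breaking out, looking bullish!", "selling everything, bearish on $GE"]
labels = [1, 0]  # 1 = bullish, 0 = bearish

def preprocess(text):
    text = text.lower()                      # case normalization
    return re.sub(r"[^a-z$ ]", " ", text)    # special character removal

vec = TfidfVectorizer(preprocessor=preprocess, stop_words="english",
                      ngram_range=(1, 2))    # bag of words + bigrams, TF-IDF
X = vec.fit_transform(tweets)
clf = LogisticRegression().fit(X, labels)

# Correlate aggregated daily sentiment with daily price movement.
daily_sentiment = np.array([0.8, -0.2, 0.5, 0.1])
daily_return = np.array([0.01, -0.005, 0.007, 0.002])
r, p = pearsonr(daily_sentiment, daily_return)
print(f"Pearson r = {r:.2f}")
```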

Keywords: machine learning, sentiment analysis, stock price prediction, tweet processing

Procedia PDF Downloads 135
1254 Transparency Obligations under the AI Act Proposal: A Critical Legal Analysis

Authors: Michael Lognoul

Abstract:

In April 2021, the European Commission released its AI Act Proposal, which is the first policy proposal at the European Union level to target AI systems comprehensively, in a horizontal manner. This Proposal notably aims to achieve an ecosystem of trust in the European Union, based on the respect of fundamental rights, regarding AI. Among many other requirements, the AI Act Proposal aims to impose several generic transparency obligations on all AI systems to the benefit of natural persons facing those systems (e.g. information on the AI nature of systems, in case of an interaction with a human). The Proposal also provides for more stringent transparency obligations, specific to AI systems that qualify as high-risk, to the benefit of their users, notably on the characteristics, capabilities, and limitations of the AI systems they use. Against that background, this research firstly presents all such transparency requirements in turn, as well as related obligations, such as the proposed obligations on record keeping. Secondly, it focuses on a legal analysis of their scope of application, of the content of the obligations, and of their practical implications. On the scope of the transparency obligations tailored for high-risk AI systems, the research notes that it seems relatively narrow, given the proposed legal definition of the notion of users of AI systems. Hence, where end-users do not qualify as users, they may only receive very limited information. This element might potentially raise concern regarding the objective of the Proposal. On the content of the transparency obligations, the research highlights that the information that should benefit users of high-risk AI systems is both very broad and specific, from a technical perspective. Therefore, the information required under those obligations seems to create, prima facie, an adequate framework to ensure trust for users of high-risk AI systems. However, on the practical implications of these transparency obligations, the research notes that concern arises due to the potential illiteracy of high-risk AI system users. They might not benefit from sufficient technical expertise to fully understand the information provided to them, despite the wording of the Proposal, which requires that information be comprehensible to its recipients (i.e. users). On this matter, the research points out that there could be, more broadly, an important divergence between the level of detail of the information required by the Proposal and the level of expertise of users of high-risk AI systems. As a conclusion, the research provides policy recommendations to tackle (part of) the issues highlighted. It notably recommends broadening the scope of transparency requirements for high-risk AI systems to encompass end-users. It also suggests that principles of explanation, as put forward in the Guidelines for Trustworthy AI of the High-Level Expert Group, should be included in the Proposal in addition to transparency obligations.

Keywords: AI Act proposal, explainability of AI, high-risk AI systems, transparency requirements

Procedia PDF Downloads 278
1253 Accessibility Assessment of School Facilities Using Geospatial Technologies: A Case Study of District Sheikhupura

Authors: Hira Jabbar

Abstract:

Education is vital for the inclusive growth of an economy and a critical contributor to investment in human capital. Like other developing countries, Pakistan is facing enormous challenges regarding the provision of public facilities: improper infrastructure planning, an accelerating population growth rate and poor accessibility. Rapid advancement and innovation in GIS and RS techniques have proved to be useful tools for better planning and decision-making to encounter these challenges. The present study therefore incorporates GIS and RS techniques to investigate the spatial distribution of school facilities, identify settlements with served and unserved populations, find potential areas for new schools based on population, and develop an accessibility index to evaluate accessibility to schools. For this purpose, high-resolution WorldView imagery was used to map the road network, settlements and school facilities and to generate school accessibility for each level. Landsat 8 imagery was utilized to extract the built-up area by applying pre- and post-processing models, and LandScan 2015 was used to analyze population statistics. Service area analysis was performed using the Network Analyst extension in ArcGIS 10.3, and results were evaluated for served and underserved areas and populations. An accessibility tool was used to evaluate a set of potential destinations to determine which is the most accessible given the population distribution. Findings of the study may help town planners and education authorities understand the existing patterns of school facilities. It is concluded that GIS and remote sensing can be effectively used in urban transport and facility planning.
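One common formulation of such an accessibility index is sketched below with straight-line distances and an assumed 2 km service threshold; the study's exact index and its network-based distances are not reproduced here.

```python
# Distance-based accessibility: population served within a service area,
# plus an inverse-distance index per settlement.
import numpy as np

settlements = np.array([[2.0, 3.0], [5.0, 1.0], [0.5, 4.0]])  # x, y (km)
population = np.array([1200, 800, 300])                       # LandScan-style counts
schools = np.array([[1.0, 1.0], [4.0, 4.0]])

# Distance of every settlement to its nearest school.
d = np.linalg.norm(settlements[:, None, :] - schools[None, :, :], axis=2)
nearest = d.min(axis=1)

served = nearest <= 2.0                      # assumed 2 km service threshold
print("served population:", population[served].sum())
print("unserved population:", population[~served].sum())

# Inverse-distance accessibility index; higher means more accessible.
access_index = 1.0 / (1.0 + nearest)
print("accessibility index:", access_index.round(2))
```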

Keywords: accessibility, geographic information system, landscan, worldview

Procedia PDF Downloads 311
1252 Marketing Parameters on Consumer's Perceptions of Farmed Sea Bass in Greece

Authors: Sophia Anastasiou, Cosmas Nathanailides, Fotini Kakali, Kostas Karipoglou

Abstract:

Wild fish are considered tastier and are offered in fish restaurants at twice the price of farmed fish. Several chemical and structural differences can affect consumers' attitudes towards farmed fish. The structure and chemical composition of fish muscle are also important for the performance of farmed fish during handling, storage and processing. In the present work, we present the chemical and sensory parameters used as indicators of fish flesh quality, and we investigate the perceptions of consumers regarding farmed sea bass and the organoleptic differences between samples of wild and farmed sea bass. A questionnaire was distributed to a group of regular sea bass consumers of various ages. The questionnaire included a survey of perceived differences in taste and appearance between wild and farmed sea bass. A significant percentage (>40%) of the participants stated their perception of the superior taste of wild sea bass versus the farmed fish. The participants took part in an organoleptic assessment of wild and farmed sea bass prepared and cooked by a local fish restaurant. Portions were evaluated for the intensity of sensorial attributes from 1 (low intensity) to 5 (high intensity). The results indicate that, contrary to the assessors' perception, farmed sea bass scored better in all organoleptic parameters assessed, with marked superiority in texture and taste over the wild sea bass. This research has been co-financed by the European Union (European Social Fund – ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: ARCHIMEDES III. Investing in knowledge society through the European Social Fund.

Keywords: fish marketing, farmed fish, seafood quality, wild fish

Procedia PDF Downloads 387
1251 A Two Server Poisson Queue Operating under FCFS Discipline with an ‘m’ Policy

Authors: R. Sivasamy, G. Paulraj, S. Kalaimani, N.Thillaigovindan

Abstract:

For profitable businesses, queues are double-edged swords, and the pain of long wait times in a queue often frustrates customers. This paper suggests a technical way of reducing the pain of lines through a Poisson M/M1,M2/2 queueing system operated by two heterogeneous servers, with the objective of minimising the mean sojourn time of customers served under the queue discipline 'First Come First Served with an m policy' (FCFS-m policy). Arrivals to the system form a Poisson process of rate λ and are served by two exponential servers. The service times of successive customers at server j are independent and identically distributed (i.i.d.) random variables, each exponentially distributed with rate parameter μj (j=1, 2). The primary condition for implementing the FCFS-m policy on these service rates μj (j=1, 2) is that either (m+1)μ2 > μ1 > mμ2 or (m+1)μ1 > μ2 > mμ1 must be satisfied. Furthermore, waiting customers prefer server-1 whenever it becomes available for service, and server-2 is brought into service if and only if the queue length exceeds the threshold value m. Steady-state results on the queue length and waiting time distributions have been obtained. A simple way of tracing the optimal service rate μ*2 of server-2 is illustrated in a specific numerical exercise that equalizes the average queue length cost with the service cost. Assuming that server-1 dynamically adjusts the service rate to μ1 (with μ2=0) while the system size is strictly less than T=(m+2), and to μ1+μ2 (with μ2>0) when the system size is greater than or equal to T, the corresponding steady-state results of M/M1+M2/1 queues have been deduced from those of M/M1,M2/2 queues. As a viable application of this investigation, the results for M/M1+M2/1 queues have been used to model the processing of waiting messages at a single computer node and to measure the power consumption of the node.
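A toy discrete-event simulation of the FCFS-m policy described above: waiting customers always prefer server-1, and server-2 serves only while the queue length exceeds the threshold m. The rates below are illustrative and satisfy (m+1)μ2 > μ1 > mμ2; the sketch estimates the mean sojourn time rather than reproducing the paper's analytical steady-state results.

```python
# Event-driven simulation of two heterogeneous exponential servers with
# an m-policy threshold for activating the slower server-2.
import heapq
import random

def simulate(lam=1.8, mu1=1.5, mu2=1.0, m=1, horizon=100_000, seed=0):
    rng = random.Random(seed)
    t, queue = 0.0, []                    # queue holds arrival times (FCFS)
    busy = {1: None, 2: None}             # server -> arrival time of job in service
    events = [(rng.expovariate(lam), "arrival")]
    total_sojourn, served = 0.0, 0

    def try_start(now):
        # Server-1 first; server-2 only while the queue exceeds m.
        if queue and busy[1] is None:
            busy[1] = queue.pop(0)
            heapq.heappush(events, (now + rng.expovariate(mu1), "dep1"))
        if len(queue) > m and busy[2] is None:
            busy[2] = queue.pop(0)
            heapq.heappush(events, (now + rng.expovariate(mu2), "dep2"))

    while events and t < horizon:
        t, kind = heapq.heappop(events)
        if kind == "arrival":
            queue.append(t)
            heapq.heappush(events, (t + rng.expovariate(lam), "arrival"))
        else:
            server = 1 if kind == "dep1" else 2
            total_sojourn += t - busy[server]   # departure minus arrival
            served += 1
            busy[server] = None
        try_start(t)
    return total_sojourn / served

print(f"mean sojourn time ~ {simulate():.3f}")
```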

Keywords: two heterogeneous servers, M/M1, M2/2 queue, service cost and queue length cost, M/M1+M2/1 queue

Procedia PDF Downloads 350
1250 Innovative Waste Management Practices in Remote Areas

Authors: Dolores Hidalgo, Jesús M. Martín-Marroquín, Francisco Corona

Abstract:

Municipal waste consists of a variety of items that are discarded every day by the population. It is usually collected by municipalities and includes waste generated by households, commercial activities (local shops) and public buildings. The composition of municipal waste varies greatly from place to place, being mostly related to levels and patterns of consumption, rates of urbanization, lifestyles, and local or national waste management practices. Each year, a huge amount of resources is consumed in the EU and, accordingly, a huge amount of waste is produced. The environmental problems derived from the management and processing of these waste streams are well known and include impacts on land, water and air. The situation in remote areas is even worse. Difficult access when climatic conditions are adverse, remoteness from centralized municipal treatment systems and dispersion of the population are all factors that make remote areas a real municipal waste treatment challenge. Furthermore, the scope of the problem increases significantly because of the total lack of awareness of the existing risks in these areas, together with the poor implementation of a culture of responsible waste minimization and recycling. The aim of this work is to analyze the existing situation in remote areas with reference to the production of municipal waste and to evaluate the efficiency of different management alternatives. Ideas for improving waste management in remote areas include, for example: implementing self-management systems for the organic fraction, establishing door-to-door collection models, promoting small-scale treatment facilities, and adjusting the corresponding waste generation rates.

Keywords: door to door collection, islands, isolated areas, municipal waste, remote areas, rural communities

Procedia PDF Downloads 246
1249 Efficiency of PCR-RFLP for the Identification of Adulteries in Meat Formulation

Authors: Hela Gargouri, Nizar Moalla, Hassen Hadj Kacem

Abstract:

Meat adulteration, affecting the safety and quality of food, is becoming one of the main concerns of public interest across the world. Its drastic consequences for the meat industry have highlighted the urgent necessity of controlling product quality and have pointed out the complexity of both supply and processing circuits. Due to the expansion of this problem, authenticity testing of foods, particularly meat and its products, is deemed crucial to avoid unfair market competition and to protect consumers from fraudulent practices of meat adulteration. The adoption of authentication methods by food quality-control laboratories is becoming a priority issue. However, in some developing countries, the number of food tests is still insignificant, although a variety of processed and traditional meat products are widely consumed. Little attention has been paid to providing an easy, fast, reproducible, and low-cost molecular test that could be conducted in a basic laboratory. In the current study, a 359 bp fragment of the cytochrome-b gene was mapped by PCR-RFLP, first using fresh biological supports (DNA and meat) and then turkey salami as an example of commercial processed meat. The technique was established through several optimizations, namely the selection of restriction enzymes. Digestion with BsmAI, SspI, and TaaI succeeded in identifying the seven included animal species, both when the meat consisted of an individual species and when it was a mixture of different origins. In this study, the PCR-RFLP technique using universal primers met our needs by providing an indirect sequencing method in which restriction enzymes identify the specificities characterizing different species on the same amplicon, reducing the number of potential tests.
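An illustrative in-silico digest: scan an amplicon for the recognition sites of the three enzymes named above and report the predicted cut pattern. The recognition sequences (BsmAI GTCTC, SspI AATATT, TaaI ACNGT) are taken from common references, reverse-strand sites are ignored for brevity, and the toy sequence is a placeholder, not the real 359 bp cytochrome-b amplicon.

```python
# Predict restriction sites on a (placeholder) cytochrome-b amplicon.
import re

ENZYMES = {
    "BsmAI": "GTCTC",      # forward-strand recognition only, for brevity
    "SspI": "AATATT",
    "TaaI": "AC[ACGT]GT",  # N = any base
}

def site_positions(seq, pattern):
    return [m.start() for m in re.finditer(pattern, seq)]

amplicon = "ATGGTCTCAATATTACGGTACAGTACTGTAATATTGGTCTCAA"  # placeholder sequence
for name, pat in ENZYMES.items():
    cuts = site_positions(amplicon, pat)
    print(f"{name}: {len(cuts)} site(s) at {cuts}")
# Different species yield different cut patterns (RFLPs) on the same
# amplicon, which is what allows their identification.
```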

Keywords: adulteration, animal species, authentication, meat, mtDNA, PCR-RFLP

Procedia PDF Downloads 98
1248 Development of Vertically Integrated 2D Lake Victoria Flow Models in COMSOL Multiphysics

Authors: Seema Paul, Jesper Oppelstrup, Roger Thunvik, Vladimir Cvetkovic

Abstract:

Lake Victoria is the second largest freshwater body in the world, located in East Africa with a catchment area of 250,000 km², of which 68,800 km² is the actual lake surface. The hydrodynamic processes of this shallow (40–80 m deep) water system are unique due to its location at the equator, which makes Coriolis effects weak. The paper describes a St. Venant shallow water model of Lake Victoria developed in the COMSOL Multiphysics software, a general-purpose finite element tool for solving partial differential equations. Depth soundings taken in smaller parts of the lake were combined with recent, more extensive data to resolve discrepancies in the lake shore coordinates. The topography model must have continuous gradients, and Delaunay triangulation with Gaussian smoothing was used to produce the lake depth model. The model shows large-scale flow patterns, passive tracer concentration and water level variations in response to river and tracer inflow, rain and evaporation, and wind stress. Measured data on precipitation, evaporation, and in- and outflows were applied in a fifty-year simulation. It should be noted that the water balance is dominated by rain and evaporation, and the model simulations are validated in Matlab and COMSOL. The model conserves water volume, the celerity gradients are very small, and the volume flow is very slow and irrotational except at river mouths. Numerical experiments show that the single outflow can be modelled by a simple linear control law responding only to mean water level, except in a few instances. Experiments with tracer input in rivers show very slow dispersion of the tracer, a result of the slow mean velocities, in turn caused by the near-balance of rain with evaporation. The numerical hydrodynamic model can evaluate the effects of the wind stress exerted on the lake surface, which impacts the lake water level. The model can also evaluate the effects of the expected climate change, as manifested in future changes to rainfall over the catchment area of Lake Victoria.
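A minimal sketch of the lake water balance that dominates the model, with the single outflow approximated by the linear control law on mean level reported above. All coefficients are illustrative assumptions, not calibrated values from the paper.

```python
# Lumped water balance: dV/dt = rain + inflow - evaporation - outflow,
# with outflow = baseline + gain * (level anomaly).
A = 68.8e9            # lake surface area (m^2)
V0 = 2.75e12          # initial volume (m^3), illustrative
V = V0
k_out = 5.0e9         # outflow gain (m^3/yr per m of level), assumed

dt = 1.0 / 365.0      # one day, in years
for day in range(365 * 50):                 # fifty-year simulation
    rain = 1.8 * A                          # ~1.8 m/yr precipitation, assumed
    evap = 1.7 * A                          # ~1.7 m/yr evaporation, assumed
    inflow = 2.0e10                         # river inflow (m^3/yr), assumed
    h = (V - V0) / A                        # mean level anomaly (m)
    outflow = max(0.0, 2.0e10 + k_out * h)  # linear control law on mean level
    V += (rain - evap + inflow - outflow) * dt

print(f"final level anomaly: {(V - V0) / A:.2f} m")
```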

Keywords: bathymetry, lake flow and steady state analysis, water level validation and concentration, wind stress

Procedia PDF Downloads 211
1247 Designing Agricultural Irrigation Systems Using Drone Technology and Geospatial Analysis

Authors: Yongqin Zhang, John Lett

Abstract:

Geospatial technologies have been increasingly used in agriculture for various applications and purposes in recent years. Unmanned aerial vehicles (drones) fit the needs of farmers in farming operations, from field spraying to grow cycles and crop health. In this research, we conducted a practical research project that used drone technology to design and map optimal locations and layouts of irrigation systems for agricultural farms. We flew a DJI Mavic 2 Pro drone to acquire aerial remote sensing images over two agricultural fields in Forest, Mississippi, in 2022. Flight plans were first designed to capture multiple high-resolution images via a 20-megapixel RGB camera mounted on the drone over the agricultural fields. The DroneDeploy web application was then utilized for flight planning and subsequent image processing and measurements. The images were orthorectified and processed to estimate the area of the fields and to measure the locations of the water lines and sprinkler heads. Field measurements were conducted to measure the ground targets and validate the aerial measurements. Geospatial analysis and photogrammetric measurements were performed for the study area to determine the optimal layout and quantitative estimates for the irrigation systems. We created maps and tabular estimates to demonstrate the locations, spacing, number, and layout of sprinkler heads and water lines to cover the agricultural fields. This research project provides scientific guidance to Mississippi farmers for precision agricultural irrigation practice.
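A hedged sketch of the photogrammetric bookkeeping behind such measurements: ground sampling distance (GSD) from flight altitude and camera geometry, and the ground footprint of one image. The camera parameters below are typical of a 20 MP one-inch-sensor drone camera and are assumptions, not the exact DJI Mavic 2 Pro specification.

```python
# GSD and per-image ground footprint from camera geometry and altitude.
sensor_width_mm = 13.2      # one-inch sensor width (assumed)
focal_length_mm = 10.3      # lens focal length (assumed)
image_width_px = 5472       # 20 MP image width
altitude_m = 60.0           # flight altitude above ground (assumed)

gsd_m = (sensor_width_mm * altitude_m) / (focal_length_mm * image_width_px)
print(f"GSD = {gsd_m * 100:.2f} cm/pixel")

# Ground footprint of a single image, useful when planning image overlap.
footprint_w = gsd_m * image_width_px
print(f"image ground width = {footprint_w:.1f} m")
```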

Keywords: drone images, agriculture, irrigation, geospatial analysis, photogrammetric measurements

Procedia PDF Downloads 61
1246 Application of Remote Sensing for Monitoring the Impact of Lapindo Mud Sedimentation for Mangrove Ecosystem, Case Study in Sidoarjo, East Java

Authors: Akbar Cahyadhi Pratama Putra, Tantri Utami Widhaningtyas, M. Randy Aswin

Abstract:

Indonesia, an archipelagic nation, has a very long coastline with large potential marine resources, one of which is mangrove ecosystems. The Lapindo mudflow disaster in Sidoarjo, East Java, required the mudflow to be channelled into the sea through the Brantas and Porong rivers. The mud material transported by the river flow is feared to be dangerous because it may contain harmful substances such as heavy metals. This study aims to map the mangrove ecosystem in terms of its density, to assess how large the impact of the Lapindo mud disaster on the mangrove ecosystem has been, and to support efforts to maintain the continuity of the ecosystem. Coastal mangrove conditions in Sidoarjo were mapped using remote sensing products, namely Landsat 7 ETM+ images recorded in the dry months of 2002, 2006, 2009, and 2014. Mangrove density was detected using NDVI, which uses band 3 (the red channel) and band 4 (the near-IR channel). NDVI images were produced using ENVI 5.1 software; the NDVI values used for detecting mangrove density range from 0 to 1. Both the area and the density of the mangrove ecosystem increased significantly from year to year. Mangrove growth is affected by the deposition of Lapindo mud material in the Porong and Brantas river estuaries, where the silt provides a suitable and steadily expanding growing medium for the mangrove ecosystem. The increase in density was also supported by public awareness of the need to contain the heavy metals in the material, with mangrove nurseries established around the ponds.
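A minimal sketch of the NDVI computation used above for Landsat 7 ETM+, where band 3 is red, band 4 is near-IR, and NDVI = (NIR − Red)/(NIR + Red). The arrays and the density threshold are tiny placeholders; the paper does not list its exact class breaks.

```python
# NDVI from red and near-IR reflectance rasters, plus a simple density class.
import numpy as np

red = np.array([[0.10, 0.08], [0.12, 0.30]])   # band 3 reflectance (placeholder)
nir = np.array([[0.40, 0.35], [0.45, 0.32]])   # band 4 reflectance (placeholder)

ndvi = (nir - red) / (nir + red + 1e-9)        # small epsilon avoids div-by-zero

dense = ndvi >= 0.42                           # illustrative class break
print(ndvi.round(2))
print("dense mangrove pixels:", int(dense.sum()))
```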

Keywords: archipelagic nation, mangrove, Lapindo mudflow disaster, NDVI

Procedia PDF Downloads 419
1245 Lamb Waves Wireless Communication in Healthy Plates Using Coherent Demodulation

Authors: Rudy Bahouth, Farouk Benmeddour, Emmanuel Moulin, Jamal Assaad

Abstract:

Guided ultrasonic waves are used in Non-Destructive Testing (NDT) and Structural Health Monitoring (SHM) for inspection and damage detection. Recently, wireless data transmission using ultrasonic waves in solid metallic channels has gained popularity in some industrial applications such as nuclear, aerospace and smart vehicles. The idea is to find a good substitute for electromagnetic waves, since these are highly attenuated near metallic components due to Faraday shielding. The proposed solution is to use ultrasonic guided waves, such as Lamb waves, as an information carrier due to their capability of propagating over long distances. In addition, valuable information about the health of the structure can be extracted simultaneously. In this work, the reliable frequency bandwidth for communication is first extracted experimentally from dispersion curves. Then, an experimental platform for wireless communication using Lamb waves is described and built. After this, the coherent demodulation algorithm used in telecommunications is tested for Amplitude Shift Keying, On-Off Keying and Binary Phase Shift Keying modulation techniques. Signal processing parameters such as the threshold choice, the number of cycles per bit and the bit rate are optimized. Experimental results are compared based on the average Bit Error Rate. Results have shown a high sensitivity to threshold selection for the Amplitude Shift Keying and On-Off Keying techniques, resulting in a bit rate decrease. The Binary Phase Shift Keying technique shows the highest stability and data rate among all tested modulation techniques.
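A compact sketch of coherent BPSK demodulation as tested above: each bit is a burst of carrier cycles with phase 0 or π, and the receiver correlates with a synchronized local carrier over each bit. The carrier frequency, cycles per bit and noise level are illustrative, not the optimized values of the paper.

```python
# Coherent BPSK: modulate, add noise, correlate with a local carrier, decide.
import numpy as np

fs, fc, cycles_per_bit = 1.0e6, 50e3, 10
spb = int(fs * cycles_per_bit / fc)            # samples per bit
t = np.arange(spb) / fs

bits = np.array([1, 0, 1, 1, 0])
tx = np.concatenate([np.cos(2 * np.pi * fc * t + (0 if b else np.pi))
                     for b in bits])
rx = tx + 0.5 * np.random.default_rng(0).normal(size=tx.size)  # noisy channel

ref = np.cos(2 * np.pi * fc * t)               # synchronized local carrier
decisions = [int(np.dot(rx[i * spb:(i + 1) * spb], ref) > 0)
             for i in range(len(bits))]        # integrate-and-dump per bit
ber = np.mean(np.array(decisions) != bits)
print("decoded:", decisions, "BER:", ber)
```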

Keywords: lamb waves communication, wireless communication, coherent demodulation, bit error rate

Procedia PDF Downloads 230
1244 Corrosion Analysis and Interfacial Characterization of Al – Steel Metal Inert Gas Weld - Braze Dissimilar Joints by Micro Area X-Ray Diffraction Technique

Authors: S. S. Sravanthi, Swati Ghosh Acharyya

Abstract:

Automotive lightweighting is of major prominence at present due to its contribution to improved fuel economy and reduced environmental pollution. Various arc welding technologies are being employed in the production of automobile components with reduced weight. The present study is of practical importance since it involves the preferential substitution of zinc-coated mild steel with a lightweight alloy such as 6061 aluminium by means of the Gas Metal Arc Welding (GMAW) – brazing technique at different processing parameters. However, the fabricated joints have shown the generation of an Al–Fe layer at the interfacial regions, which was confirmed by Scanning Electron Microscopy and Energy Dispersive Spectroscopy. These Al–Fe compounds not only affect the mechanical strength but also predominantly deteriorate the corrosion resistance of the joints. Hence, it is essential to understand the phases formed in this layer and their crystal structure. The micro-area X-ray diffraction technique has been used exclusively for this purpose. Moreover, crevice corrosion analysis at the joint interfaces was done by exposing the joints to 5 wt.% FeCl3 solution at regular time intervals as per ASTM G 48-03. The joints have shown decreased crevice corrosion resistance with increased heat intensity. The inner surfaces of the welds have shown severe oxide cracking and a remarkable weight loss when exposed to concentrated FeCl3. The weight loss was enhanced with decreased filler wire feed rate and increased heat intensity.

Keywords: automobiles, welding, corrosion, lap joints, Micro XRD

Procedia PDF Downloads 113
1243 A Phenomenological Approach to Computational Modeling of Analogy

Authors: José Eduardo García-Mendiola

Abstract:

In this work, a phenomenological approach to the computational modeling of analogy processing is carried out. The paper considers the structure of analogy, based on the possibility of sustaining the genesis of its elements with reference to Husserl's genetic theory of association. Among the particular processes which take place in order to obtain analogical inferences, there is one that is crucial for enabling efficient retrieval of base cases from long-term memory, namely analogical transference grounded on familiarity. In general, it has been argued that analogical reasoning is a way by which a conscious agent tries to determine or define a certain scope of objects and the relationships between them using previous knowledge of another, familiar domain of objects and relations. However, in looking for a complete description of the analogy process, a deeper consideration of its phenomenological nature is required insofar as its simulation by computational programs is the aim. One would also get an idea of how complex it would be to give a fully computational account of the elements of analogy. In fact, familiarity is not the result of a mere chain of repetitions of objects or events but is generated insofar as the object, attribute or event in question is integrable into a certain context that takes shape as functionalities and functional approaches or perspectives on the object are being defined. Familiarity is generated not by the identification of an object's parts or objective determinations as if they were isolated from those functionalities and approaches. Rather, at the core of such familiarity between entities of different kinds lies the way they are functionally encoded. Hoping to make deeper inroads into these topics, this essay considers that cognitive-computational perspectives can draw from the phenomenological projection of the analogy process both a review of achievements already obtained and the exploration of new theoretical-experimental configurations for implementing analogy models in special-purpose as well as general-purpose machines.

Keywords: analogy, association, encoding, retrieval

Procedia PDF Downloads 102
1242 Impact of Interface Soil Layer on Groundwater Aquifer Behaviour

Authors: Hayder H. Kareem, Shunqi Pan

Abstract:

The geological environment where groundwater is collected represents the most important element affecting the behaviour of a groundwater aquifer. As groundwater is a vital resource worldwide, the parameters that affect this source must be known accurately so that the conceptualized mathematical models are acceptable over the broadest possible ranges. Groundwater models have therefore recently become an effective and efficient tool for investigating groundwater aquifer behaviour. A groundwater aquifer may contain aquitards, aquicludes, or interfaces within its geological formations. Aquitards and aquicludes are geological formations that force modellers to include them within conceptualized groundwater models, while interfaces are commonly neglected from the conceptualization process because modellers believe that an interface has no effect on aquifer behaviour. The current research highlights the impact of an interface existing in a real unconfined groundwater aquifer called Dibdibba, located in Al-Najaf City, Iraq, where the Euphrates River passes through the eastern part of the city. The Dibdibba groundwater aquifer consists of two types of soil layers separated by an interface soil layer. A groundwater model was built for Al-Najaf City to explore the impact of this interface. Calibration was done using the PEST ('Parameter ESTimation') approach, and the best Dibdibba groundwater model was obtained. When the soil interface is conceptualized, results show that the groundwater tables are significantly affected by the interface, with dry areas of 56.24 km² and 6.16 km² appearing in the upper and lower layers of the aquifer, respectively. The Euphrates River also leaks 7,359 m³/day of water into the groundwater aquifer. These results change when the soil interface is neglected: the dry area becomes 0.16 km² and the Euphrates River leakage becomes 6,334 m³/day. In addition, the conceptualized models (with and without the interface) reveal different responses to changes in the recharge rates applied to the aquifer in the uncertainty analysis test. The Dibdibba aquifer in Al-Najaf City shows a slight deficit in the amount of water supplied by the current pumping scheme, and the Euphrates River suffers from the stresses applied to the aquifer. Ultimately, this study shows a crucial need to represent the interface soil layer in model conceptualization so that the predicted present and future behaviours are more reliable for planning purposes.

Keywords: Al-Najaf City, groundwater aquifer behaviour, groundwater modelling, interface soil layer, Visual MODFLOW

Procedia PDF Downloads 174
1241 A Real-Time Moving Object Detection and Tracking Scheme and Its Implementation for Video Surveillance System

Authors: Mulugeta K. Tefera, Xiaolong Yang, Jian Liu

Abstract:

Detection and tracking of moving objects are very important in many application contexts, such as the detection and recognition of people, visual surveillance, automatic generation of video effects, and so on. However, the task of detecting the real shape of an object in motion becomes tricky due to various challenges like dynamic scene changes, the presence of shadow, and illumination variations due to light switches. For such systems, once the moving object is detected, tracking is also a crucial step for applications used in military defense, video surveillance, human-computer interaction, and medical diagnostics, as well as in commercial fields such as video games. In this paper, an object present in a dynamic background is detected using adaptive mixture-of-Gaussians analysis of the video sequences. The detected moving object is then tracked using region-based moving object tracking and inter-frame differencing mechanisms to address the partial overlapping and occlusion problems. Firstly, the detection algorithm effectively detects and extracts the moving object target using enhancement and post-processing morphological operations. Secondly, the extracted object is tracked using region-based tracking and inter-frame differencing to improve the tracking speed of real-time moving objects across video frames. Finally, a plotting method is applied to display the detected moving objects and describe the motion of the tracked object. The experiments were performed on image sequences acquired in both indoor and outdoor environments, using one stationary camera and one web camera.
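A hedged OpenCV sketch of the detection stage described above: an adaptive mixture-of-Gaussians background model (MOG2), morphological post-processing, and contour extraction of the moving objects. The video path is a placeholder, and the paper's region-based tracking and inter-frame differencing refinements are not reproduced.

```python
# MOG2 background subtraction + morphological cleanup + contour detection.
import cv2

cap = cv2.VideoCapture("surveillance.avi")     # placeholder path
mog = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = mog.apply(frame)
    mask[mask == 127] = 0                      # drop shadow-labelled pixels
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckle
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:           # ignore tiny blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) == 27:                   # Esc to quit
        break
cap.release()
```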

Keywords: background modeling, Gaussian mixture model, inter-frame difference, object detection and tracking, video surveillance

Procedia PDF Downloads 458
1240 Design and Assessment of Base Isolated Structures under Spectrum-Compatible Bidirectional Earthquakes

Authors: Marco Furinghetti, Alberto Pavese, Michele Rinaldi

Abstract:

Concave Surface Slider devices have been used more and more in real applications for the seismic protection of both bridge and building structures. Several research activities have been carried out to investigate the lateral response of this typology of devices, and a reasonably high level of knowledge has been reached. If a radial analysis is performed, the frictional force is always aligned with the restoring force, whereas under bidirectional seismic events a bi-axial interaction of the directions of motion occurs, due to the step-wise projection of the main frictional force, which is assumed to be aligned with the trajectory of the isolator. Nonetheless, if non-linear time history analyses have to be performed, standard codes provide precise rules for the definition of an averagely spectrum-compatible set of accelerograms in radial conditions, whereas for bidirectional motions different combinations of the single-component spectra can be found. Moreover, software packages for the adjustment of natural accelerograms are nowadays available, which lead to a higher quality of spectrum-compatibility and a smaller dispersion of results for radial motions. In this endeavor, a simplified design procedure is defined for building structures base-isolated by means of Concave Surface Slider devices. Different case study structures have been analyzed. In a first stage, the capacity curve was computed by means of non-linear static analyses of the fixed-base structures: inelastic fiber elements were adopted and different direction angles of the lateral forces were studied. Using these results, a linear elastic Finite Element Model was defined, characterized by the same global stiffness as the linear elastic branch of the non-linear capacity curve. Then, non-linear time history analyses were performed on the base-isolated structures by applying seven bidirectional seismic events. The spectrum-compatibility of the bidirectional earthquakes was studied by considering different combinations of the single components and adjusting single records: thanks to the proposed procedure, results have shown a small dispersion and good agreement with the assumed design values.

Keywords: concave surface slider, spectrum-compatibility, bidirectional earthquake, base isolation

Procedia PDF Downloads 277
1239 Physicochemical Properties and Thermal Inactivation of Polyphenol Oxidase of African Bush Mango (Irvingia Gabonensis) Fruit

Authors: Catherine Joke Adeseko

Abstract:

Enzymatic browning is an economically important disorder that degrades organoleptic properties and prevents consumers from purchasing fresh fruit and vegetables. Prevention and control of enzymatic browning in fruit and fruit products are therefore imperative. This study sought to investigate the catalytic effect of polyphenol oxidase (PPO) in the adverse browning of African bush mango (Irvingia gabonensis) fruit peel and pulp. PPO was isolated and purified, and its physicochemical properties were evaluated, including the effect of pH (with SDS) and temperature, together with thermodynamic studies of the thermal inactivation of the purified PPO up to 80 °C. The pH and temperature optima of PPO were found at 7.0 and 50 °C, respectively. There was a gradual increase in the activity of PPO as the pH increased; the enzyme exhibited its highest activity at neutral pH 7.0, while enzymatic inhibition was observed in the acidic region (pH 2.0). The presence of SDS at pH 5.0 and below was found to inhibit the activity of PPO from the peel and pulp of I. gabonensis. The average values of enthalpy (ΔH), entropy (ΔS), and Gibbs free energy (ΔG) obtained at 20 min of incubation and temperatures of 30–80 °C were, respectively, 39.93 kJ/mol, 431.57 J/mol/K and -107.99 kJ/mol for peel PPO, and 37.92 kJ/mol, -442.51 J/mol/K and -107.22 kJ/mol for pulp PPO. Thermal inactivation of PPO from I. gabonensis, measured with catechol as substrate, exhibited a reduction in catalytic activity as the temperature and duration of heat inactivation increased, reflected by an increase in the k value. The half-life of PPO (t1/2) decreased as the incubation temperature increased, due to the instability of the enzyme at high temperatures, and was higher in pulp than in peel. Both D and z values decreased with increasing temperature. The information from this study suggests processing parameters for controlling PPO in the potential industrial application of I. gabonensis fruit, in order to prolong the shelf-life of this fruit for maximum utilization.
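A worked sketch of the thermal-inactivation bookkeeping reported above under the usual first-order model: k from the residual-activity decay, then t1/2 = ln2/k, D = ln10/k, and ΔG from the Eyring relation. The input numbers are illustrative, not the paper's raw data.

```python
# First-order inactivation kinetics and Eyring-based Gibbs free energy.
import math

# Suppose 40% activity remains after 20 min at 70 degC (illustrative).
t_min, residual = 20.0, 0.40
k = -math.log(residual) / t_min            # first-order rate constant (1/min)

t_half = math.log(2) / k                   # half-life (min)
D = math.log(10) / k                       # decimal reduction time (min)

R, h, kB = 8.314, 6.626e-34, 1.381e-23
T = 343.15                                 # 70 degC in kelvin
k_s = k / 60.0                             # per second, for the Eyring relation
dG = -R * T * math.log(k_s * h / (kB * T)) # Gibbs free energy of activation (J/mol)

print(f"k = {k:.4f} 1/min, t1/2 = {t_half:.1f} min, D = {D:.1f} min")
print(f"dG = {dG / 1000:.1f} kJ/mol")
# With k at several temperatures, Ea follows from an Arrhenius plot,
# dH = Ea - RT and dS = (dH - dG)/T; z is the temperature rise that
# lowers D tenfold (inverse slope of log10(D) versus T).
```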

Keywords: enzymatic browning, characterization, activity

Procedia PDF Downloads 71
1238 Cognitive Rehabilitation in Schizophrenia: A Review of the Indian Scenario

Authors: Garima Joshi, Pratap Sharan, V. Sreenivas, Nand Kumar, Kameshwar Prasad, Ashima N. Wadhawan

Abstract:

Schizophrenia is a debilitating disorder marked by cognitive impairment, which deleteriously impacts social and professional functioning along with the quality of life of patients and caregivers. Cognitive symptoms often appear in the prodromal phase and worsen as the illness progresses, and they have proven to have good predictive value for the prognosis of the illness. Intensive cognitive rehabilitation (CR) has been shown to lead to improvements in both healthy and cognitively impaired subjects. Because the majority of the Indian population falls in the lower-to-middle socio-economic strata and has low education levels, using the existing packages, a majority of which were developed in the West, for cognitive rehabilitation becomes difficult. The use of technology is also restricted by high costs and by limited availability of, and familiarity with, computers and other devices, which impedes continued therapy. Cognitive rehabilitation in India uses a plethora of retraining methods for patients with schizophrenia, targeting attention, information processing, executive functions, learning and memory, and comprehension, along with social cognition. Psychologists often have to follow an integrative therapy approach involving social skills training, family therapy, and psychoeducation in order to maintain the gains from cognitive rehabilitation in the long run. This paper reviews the methodologies and cognitive retraining programs used in India. It attempts to elucidate the evolution and development of the methodologies used, from traditional paper-and-pencil retraining to more sophisticated neuroscience-informed techniques, in the cognitive rehabilitation of deficits in schizophrenia, delivered as home-based or supervised and guided programs.

Keywords: schizophrenia, cognitive rehabilitation, neuropsychological interventions, integrated approaches to rehabilitation

Procedia PDF Downloads 350
1237 Numerical Study of Bubbling Fluidized Beds Operating at Sub-atmospheric Conditions

Authors: Lanka Dinushke Weerasiri, Subrat Das, Daniel Fabijanic, William Yang

Abstract:

Fluidization at vacuum pressure is a topic of growing research interest. Several industrial applications (such as drying, extractive metallurgy, and chemical vapor deposition (CVD)) can potentially take advantage of vacuum-pressure fluidization. In particular, the fine chemical industry requires processing of thermolabile substances under safe conditions, and reduced-pressure fluidized beds offer an alternative. Fluidized beds under vacuum conditions provide optimal conditions for the treatment of granular materials, where the reduced gas pressure maintains an operational environment outside flammability conditions. Fluidization at low pressure is markedly different from the usual gas flow patterns of atmospheric fluidization. The different flow regimes can be characterized by the dimensionless Knudsen number. Nevertheless, to the authors' best knowledge, the hydrodynamics of bubbling vacuum fluidized beds has not been investigated. In this work, the two-fluid numerical method was used to determine the impact of reduced pressure on the fundamental properties of a fluidized bed. A slip-flow model, implemented through Ansys Fluent User-Defined Functions (UDFs), was used to determine the interphase momentum exchange coefficient. A wide range of operating pressures was investigated (1.01, 0.5, 0.25, 0.1, and 0.03 bar). The gas was supplied by a uniform inlet at 1.5 Umf and 2 Umf. The predicted minimum fluidization velocity (Umf) shows excellent agreement with the experimental data. The results show that the operating pressure has a notable impact on the bed properties and hydrodynamics. Furthermore, they show that the existing Goroshko correlation for predicting bed expansion is not applicable under reduced-pressure conditions.
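
As a rough guide to how the flow regime shifts across the investigated pressures, the Python sketch below evaluates the Knudsen number Kn = λ/dp, with the mean free path λ = kB·T/(√2·π·dm²·p) from kinetic theory. The molecular diameter (typical for air) and particle diameter are assumed values, not parameters reported in the study; Kn roughly between 10^-3 and 10^-1 is conventionally taken as the slip-flow regime, which is why a slip correction enters the drag model at reduced pressure.

import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

def knudsen_number(p_pa, T=300.0, d_mol=3.7e-10, d_p=100e-6):
    """Kn = mean free path / particle diameter.
    p_pa: absolute pressure [Pa]; d_mol: molecular collision diameter [m] (~air);
    d_p: particle diameter [m] (assumed, Geldart-B-like powder)."""
    mfp = kB * T / (np.sqrt(2) * np.pi * d_mol**2 * p_pa)  # mean free path, m
    return mfp / d_p

for bar in (1.01, 0.5, 0.25, 0.1, 0.03):
    print(f"{bar:4.2f} bar: Kn = {knudsen_number(bar * 1e5):.1e}")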

Keywords: computational fluid dynamics, fluidized bed, gas-solid flow, vacuum pressure, slip flow, minimum fluidization velocity

Procedia PDF Downloads 121
1236 Estimation of Atmospheric Parameters for Weather Study and Forecast over Equatorial Regions Using the Ground-Based Global Positioning System

Authors: Asmamaw Yehun, Tsegaye Kassa, Addisu Hunegnaw, Martin Vermeer

Abstract:

Various models exist to estimate neutral atmospheric parameter values, such as in-situ measurements and reanalysis datasets from numerical models. Accurately estimated atmospheric parameters are useful for weather forecasting, climate modeling, and the monitoring of climate change. Recently, Global Navigation Satellite System (GNSS) measurements have been applied to atmospheric sounding due to their robust data quality and wide horizontal and vertical coverage. Global Positioning System (GPS) solutions that include tropospheric parameters constitute a reliable dataset for assimilation into climate models. The objective of this paper is to estimate neutral atmospheric parameters, namely the Wet Zenith Delay (WZD), Precipitable Water Vapour (PWV), and Total Zenith Delay (TZD), using observational data from 2012 to 2015 at six selected GPS stations in the equatorial regions, with emphasis on the Ethiopian GPS stations. Based on the historical GPS-derived PWV estimates, we forecast the PWV from 2015 to 2030. For data processing and analysis, we applied the GAMIT-GLOBK software packages to estimate the atmospheric parameters. We found that the minimum annual-average PWV is 9.72 mm, at station IISC, and the maximum is 50.37 mm, at station BJCO. The minimum annual-average WZD is 6 cm, at IISC, and the maximum is 31 cm, at BDMT. Over the full observation series (2012 to 2015), we also found trends and cyclic patterns in WZD, PWV, and TZD at all stations.
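
The WZD-to-PWV conversion underlying such estimates is conventionally done with the dimensionless factor of Bevis et al. (1992), Π = 10^6/(ρw·Rv·(k3/Tm + k2')), where Tm is the weighted mean temperature of the atmosphere. A minimal Python sketch follows; the sample delay and surface temperature are illustrative values, not results from the paper.

def wzd_to_pwv(wzd_m, Tm):
    """Convert wet zenith delay [m] to precipitable water vapour [m]."""
    rho_w = 1000.0   # density of liquid water, kg/m^3
    Rv = 461.5       # specific gas constant of water vapour, J/(kg K)
    k2p = 0.221      # refractivity constant k2', K/Pa (22.1 K/hPa)
    k3 = 3739.0      # refractivity constant k3, K^2/Pa (3.739e5 K^2/hPa)
    pi_factor = 1.0e6 / (rho_w * Rv * (k3 / Tm + k2p))  # ~0.15-0.16
    return pi_factor * wzd_m

# Illustrative use, with Tm from the Bevis surface-temperature proxy Tm ~ 70.2 + 0.72*Ts:
Ts = 293.0                        # assumed surface temperature, K
Tm = 70.2 + 0.72 * Ts             # ~281 K
print(wzd_to_pwv(0.31, Tm) * 1e3, "mm")  # a 31 cm WZD maps to roughly 50 mm of PWV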

Keywords: atmosphere, GNSS, neutral atmosphere, precipitable water vapour

Procedia PDF Downloads 47
1235 Analytical Study of Some Physical and Mechanical Variables of Wrist Joint Injury

Authors: Nabeel Abdulkadhim Athab

Abstract:

The purpose of this research is to conduct a comparative, programmed analysis of selected physical and mechanical variables of wrist joint injury. Through this research it is possible to distinguish the amount of variation in the function of the joint after the sample underwent a rehabilitation program intended to improve the joint's effectiveness and naturally restore it. The researcher hypothesized statistically significant differences between the pre- and post-test results of the research sample as a result of subjecting the sample to the rehabilitation program, which developed the activity of the muscles acting on the wrist joint and thereby produced the differences between the pre- and post-test results. The researcher used the descriptive method. The research sample included six players with wrist joint injuries, with a mean age of 21.68 years (standard deviation 1.13) and a mean height of 178 cm (standard deviation 2.08); the sample was shown to be homogeneous. The collected data were entered into a statistical processing program in order to reach the main conclusions and recommendations, the most important of which are as follows. Conclusions: 1- The sample's adherence to the rehabilitation program improved the studied variables, overcoming the initial heterogeneity in the activity and effectiveness of the wrist joint among the injured players. 2- The programmed analysis measured the research variables with high accuracy, which made it possible to discriminate differences in motor ability between the intact and the injured wrist joint. Recommendations: 1- Use computer systems in scientific research, for the possibility of obtaining accurate research results. 2- Program rehabilitation exercises according to an expert system, so that patients can use them without needing to consult a specialist.

Keywords: analysis of wrist joint injury, physical and mechanical variables, wrist joint, wrist injury

Procedia PDF Downloads 420