Search results for: Pressure Decay Method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 21942

13122 Hypersonic Flow of CO2-N2 Mixture around a Spacecraft during the Atmospheric Reentry

Authors: Zineddine Bouyahiaoui, Rabah Haoui

Abstract:

The aim of this work is to analyze the flow around an axisymmetric blunt body, taking into account chemical and vibrational nonequilibrium. This work concerns the entry of a spacecraft into the atmosphere of the planet Mars. Since the governing equations are non-linear partial differential equations, a finite volume method is used to solve the problem numerically. The choice of the mesh and of the CFL number conditions the convergence toward the stationary solution.
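The CFL condition mentioned above ties the stable explicit time step to the mesh size and the local wave speeds. A minimal sketch of that constraint (1D form; all numbers are illustrative, not from the paper):

```python
def cfl_timestep(dx, u, c, cfl=0.5):
    """Largest stable explicit time step for a cell of size dx,
    given flow speed u and sound speed c (illustrative 1D form)."""
    return cfl * dx / (abs(u) + c)

# Example: reentry-like conditions (hypothetical numbers).
dt = cfl_timestep(dx=1e-3, u=3500.0, c=350.0, cfl=0.5)
```

Refining the mesh (smaller dx) or raising the CFL number directly changes dt, which is why both choices govern convergence to the stationary solution.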

Keywords: blunt body, finite volume, hypersonic flow, viscous flow

Procedia PDF Downloads 228
13121 Stable Time Reversed Integration of the Navier-Stokes Equation Using an Adjoint Gradient Method

Authors: Jurriaan Gillissen

Abstract:

This work is concerned with stabilizing the numerical integration of the Navier-Stokes equation (NSE) backwards in time. Applications involve the detection of sources of, e.g., sound, heat, and pollutants. Stable reverse numerical integration of parabolic differential equations is also relevant for image de-blurring. While the literature addresses the reverse integration problem of the advection-diffusion equation, the problem of numerical reverse integration of the NSE has, to our knowledge, not yet been addressed. Owing to the presence of viscosity, the NSE is irreversible, i.e., when going backwards in time, the fluid behaves as if it had a negative viscosity. As a result, perturbations from the perfect solution, due to round-off or discretization errors, grow exponentially in time, and reverse integration of the NSE is inherently unstable, even when using an implicit time integration scheme. Consequently, some sort of filtering is required in order to achieve a stable numerical reversed integration. The challenge is to find a filter with a minimal adverse effect on the accuracy of the reversed integration. In the present work, we explore an adjoint gradient method (AGM) to achieve this goal, and we apply this technique to two-dimensional (2D) decaying turbulence. The AGM solves for the initial velocity field u0 at t = 0 that, when integrated forward in time, produces a final velocity field u1 at t = 1 that is as close as feasibly possible to some specified target field v1. The initial field u0 defines a minimum of a cost functional J that measures the distance between u1 and v1. In the minimization procedure, u0 is updated iteratively along the gradient of J w.r.t. u0, where the gradient is obtained by transporting J backwards in time from t = 1 to t = 0 using the adjoint NSE. The AGM thus effectively replaces the backward integration by multiple forward and backward adjoint integrations.
Since the viscosity is negative in the adjoint NSE, each step of the AGM is numerically stable. Nevertheless, when applied to turbulence, the AGM develops instabilities, which limit the backward integration to small times. This is due to the exponential divergence of phase space trajectories in turbulent flow, which produces a multitude of local minima in J when the integration time is large. As a result, the AGM may select unphysical, noisy initial conditions. In order to improve this situation, we propose two remedies. First, we replace the integration by a sequence of smaller integrations, i.e., we divide the integration time into segments, where in each segment the target field v1 is taken as the initial field u0 from the previous segment. Second, we add an additional term (regularizer) to J, which is proportional to a high-order Laplacian of u0 and which dampens the gradients of u0. We show that suitable values for the segment size and for the regularizer allow a stable reverse integration of 2D decaying turbulence, with accurate results for more than O(10) turbulent integral time scales.
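The iterative structure of the AGM described above can be illustrated on a toy problem. The sketch below recovers the initial state of a scalar decay equation u' = -ku by descending the gradient of J = (u1 - v1)^2; for this linear model the adjoint transport reduces to multiplication by the forward propagator. The NSE machinery, step size, and iteration count are all illustrative stand-ins:

```python
import math

def recover_initial_state(v1, k=1.0, T=1.0, steps=200, lr=0.5):
    """Gradient-descent recovery of u0 for u' = -k*u from final state v1,
    minimizing J(u0) = (u1 - v1)^2 with an adjoint-derived gradient."""
    u0 = 0.0                               # initial guess
    decay = math.exp(-k * T)               # forward propagator e^{-kT}
    for _ in range(steps):
        u1 = u0 * decay                    # forward integration
        grad = 2.0 * (u1 - v1) * decay     # dJ/du1 transported back to t = 0
        u0 -= lr * grad                    # descent update on the initial field
    return u0

# With v1 = e^{-1}, the true initial state is u0 = 1.
u0_est = recover_initial_state(v1=math.exp(-1.0))
```

Each loop iteration mirrors one forward integration plus one backward adjoint integration, which is exactly how the AGM replaces a single unstable reverse integration.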

Keywords: time reversed integration, parabolic differential equations, adjoint gradient method, two dimensional turbulence

Procedia PDF Downloads 217
13120 Recovery of Draw Solution in Forward Osmosis by Direct Contact Membrane Distillation

Authors: Su-Thing Ho, Shiao-Shing Chen, Hung-Te Hsu, Saikat Sinha Ray

Abstract:

Forward osmosis (FO) is an emerging technology for direct and indirect potable water reuse applications. However, successful implementation of FO is still hindered by the lack of high-efficiency draw solution recovery. Membrane distillation (MD) is a thermal separation process using a hydrophobic microporous membrane sandwiched between a warm feed stream and a cold permeate stream. The driving force of MD is the temperature difference, which gives rise to a partial vapor pressure difference across the membrane. In this study, a direct contact membrane distillation (DCMD) system was used to recover the diluted draw solution of FO. Na3PO4 at pH 9 and EDTA-2Na at pH 8 were used as the feed solutions for MD, since they produce high water flux and minimal salt leakage in the FO process. At high pH, trivalent and tetravalent ions are much more likely to remain on the draw solution side in the FO process. The results demonstrated that PTFE with a pore size of 1 μm achieved the highest water flux (12.02 L/m2h), followed by PTFE 0.45 μm (10.05 L/m2h), PTFE 0.1 μm (7.38 L/m2h) and PP (7.17 L/m2h) while using 0.1 M Na3PO4 draw solute. The phosphate concentration and conductivity in the PTFE (0.45 μm) permeate were as low as 1.05 mg/L and 2.89 μS/cm, respectively. Although PTFE with a pore size of 1 μm obtained the highest water flux, the phosphate concentration in its permeate was higher than for the other MD membranes. This study indicated that all four MD membranes performed well, and that PTFE with a pore size of 0.45 μm was the best among the tested membranes, achieving high water flux and high rejection of phosphate (99.99%) in the recovery of diluted draw solution. High water flux and high phosphate rejection were also obtained when operating at a cross-flow velocity of 0.103 m/s with Tfeed of 60 °C and Tdistillate of 20 °C. In addition, the results show that Na3PO4 is more suitable for recovery than EDTA-2Na, and that recovering diluted Na3PO4 yields permeate water of high purity. The overall performance indicates that DCMD is a promising technology for recovering the diluted draw solution in the FO process.
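The vapor-pressure driving force behind DCMD can be estimated with the standard Antoine correlation for water; the coefficients below are the widely tabulated mmHg/°C values valid roughly from 1 to 100 °C, not data from this study:

```python
def water_vapor_pressure_mmhg(t_celsius):
    """Antoine equation for water (P in mmHg, T in Celsius, ~1-100 C)."""
    return 10 ** (8.07131 - 1730.63 / (233.426 + t_celsius))

# Driving force between a 60 C feed and a 20 C distillate stream.
dp = water_vapor_pressure_mmhg(60.0) - water_vapor_pressure_mmhg(20.0)
```

The steep, nonlinear growth of vapor pressure with temperature is why the 60 °C / 20 °C pairing used in the study gives a large transmembrane driving force.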

Keywords: membrane distillation, forward osmosis, draw solution, recovery

Procedia PDF Downloads 180
13119 Increased Cytolytic Activity of Effector T-Cells against Cholangiocarcinoma Cells by Self-Differentiated Dendritic Cells with Down-Regulation of Interleukin-10 and Transforming Growth Factor-β Receptors

Authors: Chutamas Thepmalee, Aussara Panya, Mutita Junking, Jatuporn Sujjitjoon, Nunghathai Sawasdee, Pa-Thai Yenchitsomanus

Abstract:

Cholangiocarcinoma (CCA) is an aggressive malignancy of bile duct epithelial cells in which the standard treatments, including surgery, radiotherapy, chemotherapy, and targeted therapy, are only partially effective. Many solid tumors, including CCA, escape host immune responses by creating a tumor microenvironment and generating immunosuppressive cytokines such as interleukin-10 (IL-10) and transforming growth factor-β (TGF-β). These cytokines can inhibit dendritic cell (DC) differentiation and function, leading to decreased activation and response of effector CD4+ and CD8+ T cells for cancer cell elimination. To overcome the effects of these immunosuppressive cytokines and to increase the ability of DCs to activate effector CD4+ and CD8+ T cells, we generated self-differentiated DCs (SD-DCs) with down-regulation of IL-10 and TGF-β receptors for activation of effector CD4+ and CD8+ T cells. Human peripheral blood monocytes were first transduced with lentiviral particles containing the genes encoding GM-CSF and IL-4, and then transduced a second time with lentiviral particles containing short-hairpin RNAs (shRNAs) to knock down the mRNAs of the IL-10 and TGF-β receptors. The generated SD-DCs showed up-regulation of MHC class II (HLA-DR) and co-stimulatory molecules (CD40 and CD86), comparable to those of DCs generated by the conventional method. Suppression of IL-10 and TGF-β receptors on SD-DCs by specific shRNAs significantly increased levels of IFN-γ and the cytolytic activity of DC-activated effector T cells against CCA cell lines (KKU-213 and KKU-100), while having little effect on immortalized cholangiocytes (MMNK-1). Thus, SD-DCs with down-regulation of IL-10 and TGF-β receptors increased the activation of effector T cells, making this a promising method to improve DC function in the preparation of DC-activated effector T cells for adoptive T-cell therapy.

Keywords: cholangiocarcinoma, IL-10 receptor, self-differentiated dendritic cells, TGF-β receptor

Procedia PDF Downloads 133
13118 Ophthalmic Self-Medication Practices and Associated Factors among Adult Ophthalmic Patients

Authors: Sarah Saad Alamer, Shujon Mohammed Alazzam, Amjad Khater Alanazi, Mohamed Ahmed Sankari, Jana Sameer Sendy, Saleh Al-Khaldi, Khaled Allam, Amani Badawi

Abstract:

Background: Self-medication is the selection of medicines by individuals to treat self-diagnosed conditions. There are many concerns about the safety of long-term use of nonprescription ophthalmic drugs, which may lead to a variety of serious ocular complications. Topical steroids can produce severe, eye-threatening complications, including elevation of intraocular pressure (IOP) with possible development of glaucoma and, infrequently, optic nerve damage. In recent times, many OTC ophthalmic preparations have become available without a prescription. Objective: In our study, we aimed to determine the prevalence of self-medication with ocular topical steroids and its associated factors among adult ophthalmic patients attending King Saud Medical City. Methods: This cross-sectional study targeted participants aged 18 years or above who had used topical steroid eye drops, to determine the prevalence of self-medication with ocular topical steroids and its associated factors among adult patients attending the ophthalmology clinic of King Saud Medical City (KSMC) in the central region. Results: Of a total of 308 responses, 92 (29.8%) were using ocular topical steroids: 58 (18.8%) with a prescription, 5 (1.6%) without a prescription, and 29 (9.4%) both with and without a prescription, while 216 (70.1%) did not use them. Among participants using ocular topical steroids without a prescription, 11 (12%) had done so once and 33 (35%) many times. Complications were reported by 26 (28.3%), mostly eye infection in 11 (12.4%), glaucoma in 8 (9%), and cataracts in 6 (6.7%). Reasons for self-medication with ocular topical steroids were repeated symptoms in 14 (15.2%), advice heard from a friend in 11 (15.2%), and the belief that they had enough knowledge in 11 (15.2%). Conclusion: Our study reveals that, even though a high level of knowledge and acceptable practices and attitudes were detected among participants, self-medication with steroid eye drops was still observed. This practice is mainly due to participants having repeated symptoms and thinking they have enough knowledge. Increasing patient education on self-medication with steroid eye drops and its associated complications would help reduce its incidence.

Keywords: self-medication, ophthalmic medicine, steroid eye drop, over the counter

Procedia PDF Downloads 79
13117 Approach on Conceptual Design and Dimensional Synthesis of the Linear Delta Robot for Additive Manufacturing

Authors: Efrain Rodriguez, Cristhian Riano, Alberto Alvares

Abstract:

In recent years, robot manipulators with parallel architectures have been used in additive manufacturing processes such as 3D printing. These robots have advantages, such as speed and lightness, that make them suitable for improving the efficiency and productivity of these processes. Consequently, interest in the development of parallel robots for additive manufacturing applications has increased. This article deals with the conceptual design and dimensional synthesis of a linear delta robot for additive manufacturing. Firstly, a methodology based on structured product development processes, with phases of informational design, conceptual design, and detailed design, is adopted: a) In the informational design phase, the Mudge diagram and the QFD matrix are used to elicit a set of technical requirements and to define the form, functions, and features of the robot. b) In the conceptual design phase, the functional modeling of the system through an IDEF0 diagram is performed, and the solution principles for the requirements are formulated using a morphological matrix. This phase includes the description of the mechanical, electro-electronic, and computational subsystems that constitute the general architecture of the robot. c) In the detailed design phase, a digital model of the robot is drawn in CAD software. A list of commercial and manufactured parts is detailed. Tolerances and adjustments are defined for some parts of the robot structure. The necessary manufacturing processes and tools are also listed, including milling, turning, and 3D printing. Secondly, a dimensional synthesis method applied to the design of the linear delta robot is presented. One of the most important factors in the design of a parallel robot is the useful workspace, which strongly depends on the joint space, the dimensions of the mechanism bodies, and the possible interferences between these bodies. The objective function is based on the verification of the kinematic model over a prescribed cylindrical workspace, considering geometric constraints that could lead to singularities of the mechanism. The aim is to determine the minimum dimensional parameters of the mechanism bodies for the proposed workspace. A method based on genetic algorithms was used to solve this problem. The method uses a cloud of points with the cylindrical shape of the workspace and checks the kinematic model for each point within the cloud. The evolution of the population provides the optimal parameters for the design of the delta robot. The development process of the linear delta robot with optimal dimensions for additive manufacturing is presented. The dimensional synthesis made it possible to design the mechanism of the delta robot as a function of the prescribed workspace. Finally, the implementation of the robotic platform, developed on the basis of a linear delta robot, in an additive manufacturing application using the Fused Deposition Modeling (FDM) technique is presented.
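The point-cloud synthesis idea above can be sketched as follows. A minimal (1+1) evolutionary search stands in for the full genetic algorithm, and a simple horizontal-reach test stands in for the complete linear-delta kinematic model; the tower layout, workspace radius, and mutation parameters are all illustrative, not the authors' formulation:

```python
import math, random

# Three towers on a unit circle, as in a typical linear delta layout.
TOWERS = [(math.cos(a), math.sin(a)) for a in (0.0, 2*math.pi/3, 4*math.pi/3)]

def cylinder_cloud(radius, n=200, seed=1):
    """Uniform random points on the circular cross-section of the workspace."""
    rng = random.Random(seed)
    pts = []
    for _ in range(n):
        r, th = radius * math.sqrt(rng.random()), 2*math.pi*rng.random()
        pts.append((r*math.cos(th), r*math.sin(th)))
    return pts

def reachable(p, rod_len):
    """Toy feasibility check: each rod must span the horizontal distance
    from the point to its tower (stands in for the kinematic model)."""
    return all(math.dist(p, t) <= rod_len for t in TOWERS)

def synthesize_rod_length(cloud, generations=60, seed=2):
    """Evolve the smallest rod length that keeps every cloud point reachable."""
    rng = random.Random(seed)
    best = 5.0                                     # generous, feasible start
    for _ in range(generations):
        cand = max(0.1, best + rng.gauss(0, 0.2))  # mutate the candidate
        if cand < best and all(reachable(p, cand) for p in cloud):
            best = cand                            # shorter and still feasible
    return best

L_opt = synthesize_rod_length(cylinder_cloud(radius=0.5))
```

As in the paper's method, fitness is evaluated by checking feasibility at every point of the workspace cloud, and evolution shrinks the mechanism dimensions toward the minimum that still covers the prescribed cylinder.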

Keywords: additive manufacturing, delta parallel robot, dimensional synthesis, genetic algorithms

Procedia PDF Downloads 180
13116 Value Chain Network: A Social Network Analysis of the Value Chain Actors of Recycled Polymer Products in Lagos Metropolis, Nigeria

Authors: Olamide Shittu, Olayinka Akanle

Abstract:

Value chain analysis is a common method of examining the stages involved in the production of a product, mostly agricultural produce, from the input stage to the consumption stage, including the actors involved in each stage. However, functional institutional analysis is the most common method employed in the literature to analyze the value chain of products. Apart from studying the relatively neglected phenomenon of recycled polymer products in Lagos Metropolis, this paper adopted social network analysis to attempt a grounded theory of the nature of the social network that exists among the value chain actors of the subject matter. The study adopted a grounded theory approach by conducting in-depth interviews, administering questionnaires, and conducting observations among the identified value chain actors of recycled polymer products in Lagos Metropolis, Nigeria. The thematic analysis of the collected data gave the researchers the background needed to formulate a truly representative network of the social relationships among the value chain actors of recycled polymer products in Lagos Metropolis. The paper introduces concepts such as transient and perennial social ties to explain the observed social relations among the actors. Some actors have more social capital than others as a result of the structural holes that exist in their triad networks. Households and resource recoverers are at a disadvantaged position in the network, as they face high constraints in their relationships with other actors. The study attempts to provide a new perspective on the study of the environmental value chain by analyzing the network of actors, in order to bring about policy action points and improve recycling in Nigeria. Government and social entrepreneurs can exploit the structural holes that exist in the network for the socio-economic and sustainable development of the state.
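The constraint arising from structural holes, as invoked above, is commonly quantified with Burt's network constraint measure: an actor whose contacts are themselves interconnected, or who depends on a single contact, is highly constrained. A sketch on a hypothetical value-chain network (actor names and ties are illustrative, not the study's data):

```python
# Unweighted, symmetric ties among hypothetical value-chain actors.
TIES = {
    "household":    {"recoverer"},
    "recoverer":    {"household", "aggregator"},
    "aggregator":   {"recoverer", "recycler", "manufacturer"},
    "recycler":     {"aggregator", "manufacturer"},
    "manufacturer": {"aggregator", "recycler"},
}

def p(i, j):
    """Proportion of i's ties invested in contact j."""
    return (j in TIES[i]) / len(TIES[i])

def constraint(i):
    """Burt's constraint: sum over contacts j of (p_ij + sum_q p_iq*p_qj)^2."""
    total = 0.0
    for j in TIES[i]:
        indirect = sum(p(i, q) * p(q, j) for q in TIES[i] if q != j)
        total += (p(i, j) + indirect) ** 2
    return total

scores = {actor: round(constraint(actor), 3) for actor in TIES}
```

In this toy network the household, with a single tie, hits the maximum constraint of 1.0, echoing the abstract's finding that households and resource recoverers occupy the most constrained positions.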

Keywords: recycled polymer products, social network analysis, social ties, value chain analysis

Procedia PDF Downloads 402
13115 Experimental Modal Analysis of a Suspended Composite Beam

Authors: Lahmar Lahbib, Abdeldjebar Rabiâ, Moudden B., Missoum L.

Abstract:

Vibration tests are used to identify the elasticity modulus in two directions. This strategy is applied to glass/polyester composite materials. Experimental results obtained on a specimen in free vibration showed the efficiency of this method. The obtained results were validated by comparison with results from static tests.
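Identifying the elasticity modulus from free-vibration data is typically done by inverting the Euler-Bernoulli bending-frequency formula; a sketch assuming a prismatic beam with free-free (suspended) boundary conditions, whose first eigenvalue λ1 is the standard free-free root. Any specimen numbers supplied to these functions are illustrative, not the paper's data:

```python
import math

LAMBDA1 = 4.73004  # first eigenvalue of a free-free Euler-Bernoulli beam

def first_frequency(E, length, rho, area, inertia):
    """First free-free bending frequency (Hz) of a prismatic beam."""
    return (LAMBDA1**2 / (2 * math.pi)) * math.sqrt(
        E * inertia / (rho * area * length**4))

def youngs_modulus(f1_hz, length, rho, area, inertia):
    """Invert the frequency formula to identify E from a measured f1."""
    return (2 * math.pi * f1_hz / LAMBDA1**2) ** 2 * rho * area * length**4 / inertia
```

Measuring f1 along the two in-plane directions of the composite specimen then yields the modulus in each direction, which is the identification strategy the abstract describes.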

Keywords: beam, characterization, composite, elasticity modulus, vibration

Procedia PDF Downloads 456
13114 Expression of PGC-1 Alpha Isoforms in Response to Eccentric and Concentric Resistance Training in Healthy Subjects

Authors: Pejman Taghibeikzadehbadr

Abstract:

Background and Aim: PGC-1 alpha is a transcription factor that was first detected in brown adipose tissue. Since its discovery, PGC-1 alpha has been known to facilitate beneficial adaptations such as mitochondrial biogenesis and increased angiogenesis in skeletal muscle following aerobic exercise. Therefore, the purpose of this study was to investigate the expression of PGC-1 alpha isoforms in response to eccentric and concentric resistance training in healthy subjects. Materials and Methods: Ten healthy men were randomly divided into two groups (5 subjects in the eccentric group, 5 in the concentric group). Isokinetic contraction protocols included eccentric and concentric knee extension at maximum power and an angular velocity of 60 degrees per second. The torques assigned to each subject were chosen to match the workload in both protocols, at a rotational speed of 60 degrees per second. Contractions consisted of a maximum of 12 sets of 10 repetitions for the right leg, with a rest time of 30 seconds between sets. At the beginning and end of the study, biopsy of the vastus lateralis muscle tissue was performed, in both the distal and proximal directions of the lateral thigh. To evaluate the expression of the PGC1α-1 and PGC1α-4 genes, tissue analysis was performed in each group using the Real-Time PCR technique. Data were analyzed using the dependent t-test and analysis of covariance, with SPSS 21 and Excel 2013. Results: Intra-group changes of PGC1α-1 after one session of activity were not significant in the eccentric (p = 0.168) or concentric (p = 0.959) group, and inter-group changes showed no difference between the two groups (p = 0.681). Intra-group changes of PGC1α-4 after one session of activity were significant in the eccentric group (p = 0.012) and the concentric group (p = 0.02), while inter-group changes showed no difference between the two groups (p = 0.362). Conclusion: The lack of significant changes in some of the variables suggests that the exercise stimulus was not sufficient to increase PGC1α-1 and PGC1α-4. With regard to the acute response examined here, the question of training adaptation appears to yield differing results that warrant further investigation.
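Real-Time PCR expression data of the kind analyzed above are commonly quantified with the Livak 2^-ΔΔCt method, which normalizes the target gene to a reference gene and to the pre-exercise sample; the Ct values below are hypothetical, and the study does not state which quantification formula it used:

```python
def fold_change(ct_gene_post, ct_ref_post, ct_gene_pre, ct_ref_pre):
    """Relative expression by the Livak 2^-ddCt method."""
    d_post = ct_gene_post - ct_ref_post   # normalize post sample to reference gene
    d_pre = ct_gene_pre - ct_ref_pre      # normalize pre sample to reference gene
    return 2.0 ** -(d_post - d_pre)       # 2^-(ddCt)

# A target gene amplifying one cycle earlier (relative to the reference)
# after exercise corresponds to roughly 2-fold up-regulation.
fc = fold_change(ct_gene_post=24.0, ct_ref_post=20.0,
                 ct_gene_pre=25.0, ct_ref_pre=20.0)
```

A fold change near 1.0, as implied for PGC1α-1 here, is what the non-significant t-test results would correspond to.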

Keywords: eccentric contraction, concentric contraction, PGC1α-1, PGC1α-4, human subject

Procedia PDF Downloads 73
13113 Fabricating Method for Complex 3D Microfluidic Channel Using Soluble Wax Mold

Authors: Kyunghun Kang, Sangwoo Oh, Yongha Hwang

Abstract:

PDMS (polydimethylsiloxane)-based microfluidic devices have recently been applied to biomedical research, tissue engineering, and diagnostics because PDMS is low cost, nontoxic, optically transparent, gas-permeable, and, above all, biocompatible. Generally, PDMS microfluidic devices are fabricated by conventional soft lithography. Microfabrication requires expensive cleanroom facilities and a lot of time, yet only two-dimensional or simple three-dimensional structures can be fabricated. In this study, we introduce a fabrication method for complex three-dimensional microfluidic channels using a soluble wax mold. Using a 3D printing technique, we first fabricated a three-dimensional mold made of a soluble wax material. PDMS pre-polymer is then cast around the mold and cured. The three-dimensional mold was removed from the PDMS by chemically dissolving it with methanol and acetone. In this work, two preliminary experiments were carried out. First, the solubility of several waxes was tested using various solvents, such as acetone, methanol, hexane, and IPA, and combinations of wax and solvent in which the wax dissolves were identified. Next, side effects of the solvents on the curing process of the PDMS pre-polymer were investigated. While some solvents make PDMS swell drastically, methanol and acetone cause PDMS to swell by only 2% and 6%, respectively; thus, methanol and acetone can be used to dissolve wax within PDMS without any serious impact. Based on these preliminary tests, three-dimensional PDMS microfluidic channels were fabricated using a mold printed on a 3D printer. With the proposed fabrication technique, PDMS-based microfluidic devices have the advantages of fast prototyping, low cost, and optical transparency, while also allowing complex three-dimensional geometry.
Acknowledgements: This research was supported by a Korea University Grant and by the Basic Science Research Program through the National Research Foundation of Korea (NRF).

Keywords: microfluidic channel, polydimethylsiloxane, 3D printing, casting

Procedia PDF Downloads 268
13112 A Low Cost Education Proposal Using Strain Gauges and Arduino to Develop a Balance

Authors: Thais Cavalheri Santos, Pedro Jose Gabriel Ferreira, Alexandre Daliberto Frugoli, Lucio Leonardo, Pedro Americo Frugoli

Abstract:

This paper presents a low-cost educational proposal to be used in engineering courses. Providing quality and affordable engineering education in the universities of a developing country that needs an increasing number of engineers poses a difficult problem. In Brazil, the political and economic scenario requires academic managers able to reduce costs without compromising the quality of education. Within this context, a method for teaching physics principles through the construction of an electronic balance is proposed. First, a method to develop and construct a load cell is presented, through which students can understand the physical principles of strain gauges and of the bridge circuit. The load cell structure was made of aluminum 6351-T6, with dimensions of 80 mm x 13 mm x 13 mm, and for its instrumentation a complete Wheatstone bridge was assembled with strain gauges of 350 ohms. Additionally, the process involves the use of a software tool to document the prototypes (circuit designs), the conditioning of the signal, a microcontroller, C language programming, as well as the development of the prototype. The project also uses an open-source I/O board (an Arduino microcontroller). To design the circuit, the Fritzing software is used and, to program the controller, the open-source Arduino IDE. A load cell was chosen because strain gauges are accurate and have several applications in industry. A prototype was developed for this study, and it confirmed the affordability of this educational idea. Furthermore, the goal of this proposal is to motivate students to understand the many possible high-technology applications of load cells and microcontrollers.
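The signal path from the Wheatstone bridge to the microcontroller can be sketched as follows. The four-active-arm bridge relation, the gauge factor of 2.0, the amplifier gain, and the 10-bit ADC parameters are typical textbook values for an Arduino-class setup, not measurements from the prototype:

```python
GAUGE_FACTOR = 2.0  # typical for metallic foil strain gauges (assumed)

def adc_to_volts(adc_counts, v_ref=5.0, bits=10, gain=1000.0):
    """Convert a 10-bit ADC reading (0..1023) back to the bridge output
    voltage by undoing the instrumentation-amplifier gain."""
    return adc_counts * v_ref / (2**bits - 1) / gain

def bridge_strain(v_out, v_excitation):
    """Strain for a full bridge with four active arms: Vout/Vex = GF * strain."""
    return v_out / (v_excitation * GAUGE_FACTOR)

# Mid-scale ADC reading on a 5 V excited bridge (illustrative numbers).
strain = bridge_strain(adc_to_volts(512), v_excitation=5.0)
```

Calibrating with known masses then maps strain linearly to weight, which is the final step in turning the load cell into a balance.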

Keywords: Arduino, load cell, low-cost education, strain gauge

Procedia PDF Downloads 295
13111 Gaming Mouse Redesign Based on Evaluation of Pragmatic and Hedonic Aspects of User Experience

Authors: Thedy Yogasara, Fredy Agus

Abstract:

In designing a product, it is currently crucial to focus not only on the product’s usability as assessed by performance measures, but also on the user experience (UX), which includes pragmatic and hedonic aspects of product use. These aspects play a significant role in the fulfillment of user needs, both functionally and psychologically. Pragmatic quality refers to the product’s perceived ability to support the fulfillment of behavioral goals; it is closely linked to the functionality and usability of the product. In contrast, hedonic quality is the product’s perceived ability to support the fulfillment of psychological needs. Hedonic quality relates to the pleasure of ownership and use of the product, including stimulation for personal development and communication of the user’s identity to others through the product. This study evaluates the pragmatic and hedonic aspects of the gaming mice G600 and Razer Krait using the AttrakDiff tool, in order to create an improved design that is able to generate positive UX. AttrakDiff is a method that measures the pragmatic and hedonic scores of a product on a scale from -3 to +3 through four attributes (Pragmatic Quality, Hedonic Quality-Identification, Hedonic Quality-Stimulation, and Attractiveness), represented by 28 pairs of opposite words. Based on data gathered from 15 participants, the gaming mouse G600 was identified as needing a redesign because of its low scores (pragmatic score: -0.838; hedonic score: 1; attractiveness score: 0.771). The redesign process focused on the attributes with poor scores and took into account improvement suggestions collected from interviews with the participants. The redesigned G600 mouse was evaluated using the same method. The result shows higher scores in pragmatic quality (1.929), hedonic quality (1.703), and attractiveness (1.667), indicating that the redesigned mouse is more capable of creating a pleasurable experience of product use.
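AttrakDiff attribute scores as described above are simply mean ratings over each attribute's word pairs on the -3 to +3 scale. A sketch with seven pairs per attribute (28 in total, matching the instrument); the ratings themselves are illustrative, not the study's data:

```python
from statistics import mean

def attrakdiff_scores(ratings_by_attribute):
    """ratings_by_attribute: dict mapping attribute name to a list of
    word-pair ratings in -3..+3; returns the mean score per attribute."""
    return {attr: round(mean(vals), 3) for attr, vals in ratings_by_attribute.items()}

scores = attrakdiff_scores({
    "PQ":   [-1, 0, -2, -1, 0, -1, -1],  # Pragmatic Quality
    "HQ-I": [1, 2, 1, 0, 1, 2, 1],       # Hedonic Quality-Identification
    "HQ-S": [2, 1, 1, 2, 0, 1, 1],       # Hedonic Quality-Stimulation
    "ATT":  [1, 0, 1, 1, 0, 1, 2],       # Attractiveness
})
```

A negative pragmatic mean with positive hedonic means, as in this illustration, is the pattern that flagged the original G600 for redesign.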

Keywords: AttrakDiff, hedonic aspect, pragmatic aspect, product design, user experience

Procedia PDF Downloads 148
13110 A Paradigm Shift in Patent Protection-Protecting Methods of Doing Business: Implications for Economic Development in Africa

Authors: Odirachukwu S. Mwim, Tana Pistorius

Abstract:

Since the early 1990s, political and economic pressure has mounted on policy and law makers to increase patent protection by raising the protection standards. The perception of the relation between patent protection and development, particularly economic development, has evolved significantly in the past few years. Debate on patent protection in the international arena has been significantly influenced by the perception that there is a strong link between patent protection and economic development: the level of patent protection determines the extent of development that can be achieved. Recently there has been a paradigm shift, with much emphasis on extending patent protection to methods of doing business, generally referred to as Business Method Patenting (BMP). The general perception among international organizations and the private sector also indicates a strong correlation between BMP protection and economic growth. There are two diametrically opposing views on the relation between intellectual property (IP) protection and development and innovation. One school of thought promotes the view that IP protection improves economic development by stimulating innovation and creativity. The other school advances the view that IP protection is unnecessary for the stimulation of innovation and creativity and is in fact a hindrance to open access to the resources and information required for innovative and creative modalities. Therefore, different theories and policies attach different levels of protection to BMP, with specific implications for economic growth. This study examines the impact of BMP protection on development by focusing on the challenges confronting economic growth in African communities as a result of the new paradigm in patent law. (Africa is treated as a single unit in this study, but this should not be construed as African homogeneity; rather, the views advanced here are used to address the common challenges facing many communities in Africa.) The study reviews, from the points of view of legal philosophers, policy makers, and the decisions of competent courts, the relevant literature, patent legislation (particularly the international treaties), policies, and legal judgments. Findings from this study suggest that, over and above the various criticisms levelled against the extremely liberal approach to recognizing business methods as patentable subject matter, there are other specific implications associated with such an approach. The most critical implication of extending patent protection to business methods is the locking-up of knowledge, which may hamper human development in general and economic development in particular. Locking up knowledge necessary for economic advancement and competitiveness may have a negative effect on economic growth by promoting economic exclusion, particularly in African communities. This study suggests that knowledge of BMP within the African context, and the extent of protection linked to it, is crucial for achieving sustainable economic growth in Africa. It also suggests that a balance be struck between the two diametrically opposing views.

Keywords: Africa, business method patenting, economic growth, intellectual property, patent protection

Procedia PDF Downloads 119
13109 A Comparative Study of Cognitive Functions in Relapsing-Remitting Multiple Sclerosis Patients, Secondary-Progressive Multiple Sclerosis Patients and Normal People

Authors: Alireza Pirkhaefi

Abstract:

Background: Multiple sclerosis (MS) is one of the most common diseases of the central nervous system (brain and spinal cord). Given the importance of cognitive disorders in patients with multiple sclerosis, the present study was conducted to compare cognitive functions (working memory, attention and concentration, and visual-spatial perception) in patients with relapsing-remitting multiple sclerosis (RRMS) and secondary progressive multiple sclerosis (SPMS). Method: The present study was performed as a retrospective study with an ex post facto design. The sample consisted of 60 patients with multiple sclerosis (30 relapsing-remitting and 30 secondary progressive), selected by convenience sampling from a supported community of MS patients in Tehran. Thirty normal persons were also selected as a comparison group. The Montreal Cognitive Assessment (MoCA) was used to assess cognitive functions, and data were analyzed using multivariate analysis of variance. Results: The results showed significant differences in cognitive functioning among patients with RRMS, patients with SPMS, and normal individuals. There were no significant differences in working memory between the RRMS and SPMS groups, while significant differences in this variable were seen between each of the two patient groups and normal individuals. The results also showed significant differences in attention and concentration and in visual-spatial perception among the three groups. Conclusions: The results showed differences between the cognitive functions of RRMS and SPMS patients, such that RRMS patients perform better than SPMS patients. These results can play a critical role in improving cognitive functions, reducing the factors causing disability due to cognitive impairment, and, above all, the overall health of society.

Keywords: multiple sclerosis, cognitive function, secondary-progressive, normal subjects

Procedia PDF Downloads 232
13108 Machine Learning Techniques in Bank Credit Analysis

Authors: Fernanda M. Assef, Maria Teresinha A. Steiner

Abstract:

The aim of this paper is to compare and discuss classifier algorithm options for credit risk assessment by applying different machine learning techniques. Using records from a Brazilian financial institution, this study draws on a database of 5,432 companies that are clients of the bank: 2,600 clients are classified as non-defaulters, 1,551 as defaulters, and 1,281 as temporarily defaulters, meaning that these clients are overdue on their payments for up to 180 days. For each case, a total of 15 attributes were considered in a one-against-all assessment using four different techniques: Artificial Neural Networks Multilayer Perceptron (ANN-MLP), Artificial Neural Networks Radial Basis Functions (ANN-RBF), Logistic Regression (LR), and Support Vector Machines (SVM). For each method, different parameters were analyzed so that the best configuration of each technique could be compared. Initially, the data were coded in thermometer code (numerical attributes) or dummy coding (nominal attributes). The methods were then evaluated for each parameter setting, and the best result of each technique was compared in terms of accuracy, false positives, false negatives, true positives, and true negatives. This comparison showed that the best method, in terms of accuracy, was ANN-RBF (79.20% for non-defaulter classification, 97.74% for defaulters, and 75.37% for temporarily defaulters). However, the best accuracy does not always represent the best technique. For instance, on the classification of temporarily defaulters, this technique was surpassed, in terms of false positives, by SVM, which had the lowest rate (0.07%) of false positive classifications. These details are discussed in light of the results found, and an overview of the findings is given in the conclusion of this study.
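
The thermometer coding mentioned above can be sketched as follows. The thresholds below are hypothetical bucket boundaries chosen for illustration only; the abstract does not state which cut points the study used.

```python
def thermometer_encode(value, thresholds):
    """Thermometer-code a numeric attribute: one bit per threshold,
    set to 1 for every threshold the value meets or exceeds, so the
    code 'fills up' like mercury in a thermometer."""
    return [1 if value >= t else 0 for t in thresholds]

# Hypothetical attribute: days overdue, bucketed at 30/90/180 days.
thresholds = [30, 90, 180]
print(thermometer_encode(0, thresholds))    # -> [0, 0, 0]
print(thermometer_encode(120, thresholds))  # -> [1, 1, 0]
print(thermometer_encode(200, thresholds))  # -> [1, 1, 1]
```

Unlike one-hot dummy coding (appropriate for the nominal attributes), thermometer coding preserves the ordering of a numeric attribute, which helps distance-based models such as ANN-RBF and SVM.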

Keywords: artificial neural networks (ANNs), classifier algorithms, credit risk assessment, logistic regression, machine learning, support vector machines

Procedia PDF Downloads 97
13107 Strategies for Management of Massive Intraoperative Airway Haemorrhage Complicating Surgical Pulmonary Embolectomy

Authors: Nicholas Bayfield, Liam Bibo, Kaushelandra Rathore, Lucas Sanders, Mark Newman

Abstract:

INTRODUCTION: Surgical pulmonary embolectomy is an established therapy for acute pulmonary embolism causing right heart dysfunction and haemodynamic instability. Massive intraoperative airway haemorrhage is a rare complication of pulmonary embolectomy. We present our institutional experience with massive airway haemorrhage complicating pulmonary embolectomy and discuss optimal therapeutic strategies. METHODS: A retrospective review of emergent surgical pulmonary embolectomy patients was undertaken. Cases complicated by massive intra-operative airway haemorrhage were identified, and intra- and peri-operative management strategies were analysed and discussed. RESULTS: Of 76 patients undergoing emergent or salvage pulmonary embolectomy, three cases (3.9%) of massive intraoperative airway haemorrhage were identified. Haemorrhage always began on weaning from cardiopulmonary bypass. Successful management strategies involved intraoperative isolation of the side of bleeding, occlusion of the affected airway with an endobronchial blocker, institution of veno-arterial (VA) extracorporeal membrane oxygenation (ECMO), and reversal of anticoagulation; running the ECMO without heparinisation allows coagulation to occur. Airway haemorrhage was controlled within 24 hours of operation in all patients, allowing re-institution of dual-lung ventilation and decannulation from ECMO. One case in which positive end-expiratory airway pressure was trialled initially was complicated by air embolism. Although airway haemorrhage was controlled successfully in all cases, all patients died in hospital for reasons unrelated to the airway haemorrhage. CONCLUSION: Massive intraoperative airway haemorrhage during pulmonary embolectomy is a rare complication with potentially catastrophic outcomes. Reperfusion alveolar and capillary injury is the likely aetiology. With a systematic approach to management, airway haemorrhage can be well controlled intra-operatively and often resolves within 24 hours. Stopping blood flow to the pulmonary arteries and supporting oxygenation by instituting VA ECMO is important; this management has been successful in our three cases.

Keywords: pulmonary embolectomy, cardiopulmonary bypass, cardiac surgery, pulmonary embolism

Procedia PDF Downloads 173
13106 Investigation of the Technological Demonstrator 14-X B at Different Angles of Attack at Hypersonic Velocity

Authors: Victor Alves Barros Galvão, Israel Da Silveira Rego, Antonio Carlos Oliveira, Paulo Gilberto De Paula Toro

Abstract:

The Brazilian hypersonic aerospace vehicle 14-X B (VHA 14-X B) is a vehicle integrated with a hypersonic airbreathing propulsion system based on supersonic combustion (scramjet), developed at the Prof. Henry T. Nagamatsu Laboratory of Aerothermodynamics and Hypersonics, to conduct a demonstration in atmospheric flight at the speed corresponding to Mach number 7 at an altitude of 30 km. The experimental procedure used the hypersonic shock tunnel T3 installed in that laboratory. This device simulates the flow over a model fixed in the test section and can also simulate different atmospheric conditions. The scramjet technology offers substantial advantages for improving the performance of aerospace vehicles flying at hypersonic speed through the Earth's atmosphere, by reducing the fuel carried on board. Basically, the scramjet is a fully integrated airbreathing engine that uses the oblique/conical shock waves generated during hypersonic flight to promote the deceleration and compression of atmospheric air at the scramjet inlet. During hypersonic flight, the VHA 14-X will be subject to atmospheric influences that change the vehicle's angle of attack (the angle that the mean line of the vehicle makes with the direction of the flow). Based on this, a study is conducted to analyze the influence of changes in the vehicle's angle of attack during atmospheric flight. Analytical theoretical analysis, computational fluid dynamics simulation, and experimental investigation are the methodologies used to design a technological demonstrator prior to flight in the atmosphere. This paper considers the analysis of the thermodynamic properties (pressure, temperature, density, sound velocity) on the lower surface of the VHA 14-X B. It treats air as an ideal gas in chemical equilibrium, with and without a boundary layer, considering changes in the vehicle's angle of attack (positive and negative with respect to the flow) and two-dimensional expansion wave theory at the expansion section (Prandtl-Meyer theory).
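
As a reference point for the expansion-section analysis, the Prandtl-Meyer function for a calorically perfect gas can be sketched as follows. This is the standard textbook relation, not the authors' code, and γ = 1.4 is assumed for air.

```python
import math

def prandtl_meyer(mach, gamma=1.4):
    """Prandtl-Meyer function nu(M) in radians for a calorically perfect
    gas: the angle through which a sonic flow must turn (expand) to
    reach Mach number M."""
    if mach < 1.0:
        raise ValueError("Prandtl-Meyer function is defined for M >= 1")
    a = math.sqrt((gamma + 1.0) / (gamma - 1.0))
    b = math.sqrt((gamma - 1.0) / (gamma + 1.0) * (mach**2 - 1.0))
    return a * math.atan(b) - math.atan(math.sqrt(mach**2 - 1.0))

# Turning angle between M = 1 and M = 2 for air:
print(math.degrees(prandtl_meyer(2.0)))  # ~26.4 degrees
```

The flow deflection across an expansion corner is then the difference nu(M2) - nu(M1), which is how a change in angle of attack maps to a change in the downstream Mach number and, through isentropic relations, to the surface pressure and temperature.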

Keywords: angle of attack, hypersonic experiments, hypersonic airbreathing propulsion, scramjet

Procedia PDF Downloads 399
13105 The Phenomenon of Rockfall in the TRACECA Corridor and the Choice of Engineering Measures to Combat It

Authors: I. Iremashvili, I. Pirtskhalaishvili, K. Kiknadze, F. Lortkipanidze

Abstract:

The paper deals with the causes of rockfall and its possible consequences on slopes adjacent to motorways and railways. A list of measures that hinder rockfall is given; these measures are directed at protecting roads from rockfalls rather than preventing them. From the standpoint of the local stability of slopes, the main effective measure is perhaps strengthening their surface by the filling method, which will slow or halt the processes of deformation, local slipping, sliding, and erosion.

Keywords: rockfall, concrete spraying, heliodevices, railways

Procedia PDF Downloads 370
13104 Amblyopia and Eccentric Fixation

Authors: Kristine Kalnica-Dorosenko, Aiga Svede

Abstract:

Amblyopia, or 'lazy eye', is impaired or dim vision without an obvious defect or change in the eye. It is often associated with abnormal visual experience, most commonly strabismus, anisometropia or both, and form deprivation. The main task of amblyopia treatment is to ameliorate the etiological factors so as to create a clear retinal image and to ensure the participation of the amblyopic eye in the visual process. The treatment of amblyopia with eccentric fixation is usually associated with therapeutic problems. Eccentric fixation is present in around 44% of all patients with amblyopia and in 30% of patients with strabismic amblyopia. In Latvia, amblyopia is carefully treated in various clinics, but diagnosis of eccentric fixation is relatively rare. The conflict that has developed concerning the relationship between the visual disorder and the degree of eccentric fixation in amblyopia should be rethought, because it has an important bearing on the cause and treatment of amblyopia and on the role of eccentric fixation in this condition. Visuoscopy is the most frequently used method for determining eccentric fixation. In traditional visuoscopy, a fixation target is projected onto the patient's retina, and the examiner asks the patient to look straight at the center of the target. The optometrist then observes the point on the macula used for fixation. This objective test provides clinicians with direct observation of the fixation point of the eye. It requires patients to voluntarily fixate the target and assumes that the foveal reflex accurately demarcates the center of the foveal pit. Because this is a very simple way to evaluate fixation, it allows treatment improvement to be evaluated indirectly, as eccentric fixation is always associated with reduced visual acuity. One may therefore expect that if eccentric fixation is found in an amblyopic eye with visuoscopy, then visual acuity should be less than 1.0 (in decimal units). With occlusion or another amblyopia therapy, one would expect both visual acuity and fixation to improve simultaneously, that is, fixation would become more central. Consequently, improvement in the fixation pattern with treatment is an indirect measure of improvement in visual acuity. Evaluation of eccentric fixation may be helpful in identifying amblyopia in children prior to the measurement of visual acuity. This is very important because the earlier amblyopia is diagnosed, the better the chance of improving visual acuity.

Keywords: amblyopia, eccentric fixation, visual acuity, visuoscopy

Procedia PDF Downloads 153
13103 Comparison of Different Reanalysis Products for Predicting Extreme Precipitation in the Southern Coast of the Caspian Sea

Authors: Parvin Ghafarian, Mohammadreza Mohammadpur Panchah, Mehri Fallahi

Abstract:

Synoptic patterns from the surface up to the tropopause are very important for forecasting weather and atmospheric conditions, and there are many tools to prepare and analyze such maps. Reanalysis data, the outputs of numerical weather prediction models, satellite images, meteorological radar, and weather station data are used in forecasting centers worldwide to predict the weather. Forecasting extreme precipitation on the southern coast of the Caspian Sea (CS) is a major challenge due to the complex topography, and these areas also span different climate types. In this research, we used two reanalysis datasets, the ECMWF Reanalysis 5th Generation (ERA5) and the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis, for verification of the numerical model. ERA5 is the latest version of the ECMWF reanalysis; its temporal resolution is hourly, while that of NCEP/NCAR is six-hourly. Atmospheric parameters such as mean sea level pressure, geopotential height, relative humidity, wind speed and direction, and sea surface temperature were selected and analyzed, for different types of precipitation (rain and snow). The results showed that NCEP/NCAR is better able to demonstrate the intensity of the atmospheric system, while ERA5 is suitable for extracting parameter values at a specific point. ERA5 is also appropriate for analyzing snowfall events over the CS (snow cover and snow depth). Sea surface temperature plays the main role in generating instability over the CS, especially when cold air passes over it; however, the sea surface temperature of the NCEP/NCAR product has low resolution near the coast. Both datasets were able to detect the meteorological synoptic patterns that led to heavy rainfall over the CS. However, due to their time lag, they are not suitable for forecast centers; their application is in research and in the verification of meteorological models. Finally, ERA5 has better resolution than the NCEP/NCAR reanalysis, but NCEP/NCAR data are available from 1948 and are therefore appropriate for long-term research.

Keywords: synoptic patterns, heavy precipitation, reanalysis data, snow

Procedia PDF Downloads 110
13102 Well-being of Lagos Urban Mini-bus Drivers: The Influence of Age and Marital Status

Authors: Bolajoko I. Malomo, Maryam O. Yusuf

Abstract:

Lagos urban mini-bus drivers play a critical role in the transportation sector. The current major mode of transportation within the Lagos metropolis remains road transportation, which confirms the relevance of urban mini-bus drivers in conveying the populace to their various destinations; other modes, such as rail and waterways, are currently inadequate. Threats to the well-being of urban bus drivers include the congested traffic typical of modern lifestyles, dwindling financial returns due to long hours in traffic, too few hours of sleep, inadequate diet, time pressure, and assaults related to fare disputes. Several health problems have been documented in association with urban bus driving; for instance, greater rates of hypertension, obesity, and high cholesterol have been reported. Research has yet to identify the influence of age and marital status on the well-being of urban mini-bus drivers in the Lagos metropolis. A study of this nature is necessary, as it is culturally perceived in Nigeria that older and married people are especially influenced by family affiliation and would behave in ways that project positive outcomes. The study sample consisted of 150 urban mini-bus drivers conveniently sampled from six (6) terminuses where their journeys begin and terminate. The well-being questionnaire was administered to participants. The criteria for inclusion were the ability to read English and confirmation that interested participants were on duty and suited to driving mini-buses. Owing to the nature of bus driving, the researcher administered the questionnaires to participants who were free and willing to respond to the survey. All participants were males of various age groups and marital statuses. Results of the analyses revealed no significant influence of age or marital status on the well-being of urban mini-bus drivers. This indicates that the well-being of urban mini-bus drivers is influenced by neither age nor marital status. The findings of this study have cultural implications: they negate the popularly held belief that older and married people care more about their well-being than younger and single people, and they bring to the fore the need to identify and consider other factors when certifying people for the job of urban bus driving.

Keywords: age, Lagos metropolis, marital status, well-being of urban mini bus drivers

Procedia PDF Downloads 426
13101 Lexical Based Method for Opinion Detection on Tripadvisor Collection

Authors: Faiza Belbachir, Thibault Schienhinski

Abstract:

The massive development of online social networks allows users to post and share their opinions on various topics. With this huge volume of opinion data, it is interesting to extract and interpret this information for different domains, e.g., product and service benchmarking, politics, and recommendation systems. This is why opinion detection is one of the most important research tasks: it consists of differentiating between opinion data and factual data. The difficulty of this task is to determine an approach that returns opinionated documents. Generally, two approaches are used for opinion detection: lexical-based approaches and machine-learning-based approaches. In lexical-based approaches, a dictionary of sentiment words is used, with weights associated with the words; the opinion score of a document is derived from the occurrence of words from this dictionary. In machine learning approaches, a classifier is usually trained on a set of annotated documents containing sentiment, using features such as word n-grams, part-of-speech tags, and logical forms. The majority of these works rely on document text alone to determine the opinion score, without taking into account whether these texts are really trustworthy. It is therefore interesting to exploit other information to improve opinion detection. In our work, we develop a new way to compute the opinion score by introducing the notion of a trust score: we determine opinionated documents but also whether these opinions are really trustworthy information in relation to the topics. For that, we use the SentiWordNet lexicon to calculate opinion and trust scores, and we compute different features about users (number of comments, number of useful comments, average useful reviews). We then combine the opinion score and the trust score to obtain a final score. We applied our method to detect trusted opinions in the TripAdvisor collection. Our experimental results report that the combination of the opinion score and the trust score improves opinion detection.
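
The score combination described above can be sketched as follows. The linear weighting, the user-feature normalization, and the weight `alpha` are illustrative assumptions on our part, since the abstract does not specify how the two scores are merged.

```python
def trust_score(n_comments, n_useful, avg_useful_rating, max_rating=5.0):
    """Hypothetical user trust score in [0, 1]: the fraction of the
    user's comments judged useful, scaled by their average usefulness
    rating (normalized to max_rating)."""
    if n_comments == 0:
        return 0.0
    useful_ratio = n_useful / n_comments
    return useful_ratio * (avg_useful_rating / max_rating)

def final_score(opinion_score, trust, alpha=0.7):
    """Combine a SentiWordNet-style opinion score in [0, 1] with the
    author's trust score via a weighted average (alpha is an assumed
    weight, not a value from the paper)."""
    return alpha * opinion_score + (1.0 - alpha) * trust

user = trust_score(n_comments=40, n_useful=30, avg_useful_rating=4.0)
print(final_score(opinion_score=0.8, trust=user))  # 0.7*0.8 + 0.3*0.6 ~ 0.74
```

Under this sketch, a strongly opinionated review from a low-trust author is ranked below an equally opinionated review from an author whose past comments were consistently judged useful.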

Keywords: Tripadvisor, opinion detection, SentiWordNet, trust score

Procedia PDF Downloads 192
13100 Damage-Based Seismic Design and Evaluation of Reinforced Concrete Bridges

Authors: Ping-Hsiung Wang, Kuo-Chun Chang

Abstract:

There has been a common trend worldwide in the seismic design and evaluation of bridges towards performance-based methods, in which the lateral displacement or the displacement ductility of the bridge column is regarded as an important indicator for performance assessment. However, the seismic response of a bridge to an earthquake is a combined result of cyclic displacements and accumulated energy dissipation, both of which damage the bridge, so the lateral displacement (ductility) alone is insufficient to characterize its actual seismic performance. This study proposes a damage-based seismic design and evaluation method for reinforced concrete bridges on the basis of newly developed capacity-based inelastic displacement spectra. These spectra, comprising an inelastic displacement ratio spectrum and a corresponding damage state spectrum, were constructed by means of a series of nonlinear time history analyses with a versatile, smooth hysteresis model. The smooth model takes into account the effects of various design parameters of RC bridge columns and correlates the column's strength deterioration with the Park and Ang damage index. It was shown that the damage index not only accurately predicts the onset of strength deterioration but also serves as a good indicator of the actual visible damage condition of a column regardless of its loading history (i.e., similar damage indices correspond to similar actual damage conditions for identically designed columns subjected to very different cyclic loading protocols as well as earthquake loading), providing better insight into the seismic performance of bridges. Moreover, the computed spectra show that the inelastic displacement ratio for far-field ground motions approximately conforms to the equal-displacement rule when the structural period is longer than around 0.8 s, whereas for near-fault ground motions it departs from the rule over all considered spectral regions. Near-fault ground motions lead to significantly greater inelastic displacement ratios and damage indices than far-field ground motions, and most practical design scenarios cannot survive the considered near-fault ground motion when the strength reduction factor of the bridge is 5.0 or more. Finally, a spectrum formula, obtained by regression analysis of the computed spectra, is presented as a function of the structural period, the strength reduction factor, and various column design parameters for far-field and near-fault ground motions. Based on the developed spectrum formula, a design example of a bridge illustrates the proposed damage-based seismic design and evaluation method, with the damage state of the bridge used as the performance objective.
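
For reference, the Park and Ang damage index that the smooth hysteresis model is correlated with can be sketched as below. The formula is the standard one from the literature; the numerical values in the example, including the calibration parameter beta, are invented for illustration and are not from this study.

```python
def park_ang_damage_index(delta_max, delta_ult, hysteretic_energy,
                          yield_force, beta=0.05):
    """Park-Ang damage index:
        DI = delta_max / delta_ult + beta * E_h / (F_y * delta_ult)
    combining the peak displacement demand (first term) with the
    accumulated hysteretic energy (second term). beta is a calibration
    parameter; 0.05 is only an assumed order-of-magnitude value."""
    displacement_term = delta_max / delta_ult
    energy_term = beta * hysteretic_energy / (yield_force * delta_ult)
    return displacement_term + energy_term

# Illustrative numbers only (m, kN, kN*m):
di = park_ang_damage_index(delta_max=0.06, delta_ult=0.10,
                           hysteretic_energy=120.0, yield_force=400.0)
print(di)  # 0.6 + 0.15 ~ 0.75
```

The energy term is what lets the index distinguish two load histories that reach the same peak displacement but dissipate very different amounts of energy, which is exactly the shortcoming of displacement-only (ductility) indicators noted above.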

Keywords: damage index, far-field, near-fault, reinforced concrete bridge, seismic design and evaluation

Procedia PDF Downloads 120
13099 Low Temperature Solution Processed Solar Cell Based on ITO/PbS/PbS:Bi3+ Heterojunction

Authors: M. Chavez, H. Juarez, M. Pacio, O. Portillo

Abstract:

PbS chemical bath heterojunction solar cells have shown significant improvements in performance. Here we demonstrate a solar cell based on the heterojunction formed between a PbS layer and PbS:Bi3+ thin films deposited via a solution process at 40°C. The device achieves a current density of 4 mA/cm2. The simple and low-cost deposition method of the PbS:Bi3+ films is promising for device fabrication.

Keywords: bismuth-doped PbS, solar cell, thin films

Procedia PDF Downloads 546
13098 Automatic Near-Infrared Image Colorization Using Synthetic Images

Authors: Yoganathan Karthik, Guhanathan Poravi

Abstract:

Colorizing near-infrared (NIR) images poses unique challenges due to the absence of color information and the nuances in light absorption. In this paper, we present an approach to NIR image colorization utilizing a synthetic dataset generated from visible light images. Our method addresses two major challenges encountered in NIR image colorization: accurately colorizing objects with color variations and avoiding over/under saturation in dimly lit scenes. To tackle these challenges, we propose a Generative Adversarial Network (GAN)-based framework that learns to map NIR images to their corresponding colorized versions. The synthetic dataset ensures diverse color representations, enabling the model to effectively handle objects with varying hues and shades. Furthermore, the GAN architecture facilitates the generation of realistic colorizations while preserving the integrity of dimly lit scenes, thus mitigating issues related to over/under saturation. Experimental results on benchmark NIR image datasets demonstrate the efficacy of our approach in producing high-quality colorizations with improved color accuracy and naturalness. Quantitative evaluations and comparative studies validate the superiority of our method over existing techniques, showcasing its robustness and generalization capability across diverse NIR image scenarios. Our research not only contributes to advancing NIR image colorization but also underscores the importance of synthetic datasets and GANs in addressing domain-specific challenges in image processing tasks. The proposed framework holds promise for various applications in remote sensing, medical imaging, and surveillance where accurate color representation of NIR imagery is crucial for analysis and interpretation.
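
The adversarial objective underlying such a GAN framework can be sketched numerically as follows. This is the standard non-saturating GAN loss written with the standard library for illustration; it is not the authors' architecture or training code, and in practice the generator loss is typically combined with a pixel-wise reconstruction term against the synthetic color targets.

```python
import math

def gan_losses(d_real, d_fake, eps=1e-12):
    """Standard (non-saturating) GAN losses.

    d_real: discriminator sigmoid outputs on real color images.
    d_fake: discriminator sigmoid outputs on generator colorizations
            of NIR inputs.
    Returns (discriminator_loss, generator_loss)."""
    d_loss = (-sum(math.log(p + eps) for p in d_real) / len(d_real)
              - sum(math.log(1.0 - p + eps) for p in d_fake) / len(d_fake))
    # Non-saturating form: the generator maximizes log D(G(x)).
    g_loss = -sum(math.log(p + eps) for p in d_fake) / len(d_fake)
    return d_loss, g_loss

# A confident discriminator (real ~0.9, fake ~0.1) has a low loss,
# while the generator's loss is correspondingly high:
d_loss, g_loss = gan_losses([0.9], [0.1])
print(round(d_loss, 3), round(g_loss, 3))
```

The two losses pull in opposite directions: as the generator's colorizations become harder to distinguish from real color images, `d_fake` rises, the generator loss falls, and the discriminator loss rises.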

Keywords: computer vision, near-infrared images, automatic image colorization, generative adversarial networks, synthetic data

Procedia PDF Downloads 36
13097 Evaluation of Different Anticoagulant Effects on Flow Properties of Human Blood Using Falling Needle Rheometer

Authors: Hiroki Tsuneda, Takamasa Suzuki, Hideki Yamamoto, Kimito Kawamura, Eiji Tamura, Katharina Wochner, Roberto Plasenzotti

Abstract:

The flow properties of human blood are important factors in the prevention of circulatory conditions such as high blood pressure, diabetes mellitus, and cardiac infarction. However, measuring the flow properties of human blood, especially blood viscosity, is not easy, because blood coagulates or aggregates after a sample is taken from the blood vessel. In the experiments, anticoagulants were added to the human blood to avoid its solidification. The anticoagulant used in a blood test is chosen according to the purpose of the test, because each anticoagulant acts on blood through a different mechanism; as a result, comparing blood properties measured with different anticoagulants is difficult. It is therefore important to clarify the effect of the anticoagulant on the measured blood properties. In previous work, a compact falling needle rheometer (FNR) was developed to measure the flow properties of human blood, such as the flow curve and the apparent viscosity, and it was found that the FNR system can serve as a rheometer or viscometer under various experimental conditions, not only for human blood but also for the blood of other mammals. In this study, measurements of human blood viscosity with different anticoagulants (EDTA and heparin) were carried out using the newly developed FNR system, and the effect of the anticoagulant on blood viscosity was tested. The accuracy of the viscometry was verified using standard calibration liquids (JS-10, JS-20), and the observed data agree with the reference data to within about 1.0% at 310 K. The flow curves of six males and females were measured with the FNR, with EDTA and heparin chosen as the anticoagulants; heparin inhibits the coagulation of human blood by activating antithrombin. To examine the effect of the anticoagulant on blood viscosity, flow curves were measured at high shear rates (>350 s-1), and the apparent viscosity for each person was determined with each anticoagulant. The apparent viscosity of human blood with heparin was 2%-9% higher than that with EDTA; however, the size of the difference between the two anticoagulants varied from person to person. For further discussion, the effects of other physical properties, such as the cellular and plasma components, need to be considered.
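
As an illustration of how a flow curve like the ones measured here can be reduced to viscosity parameters, the sketch below fits the Casson model, which is commonly used for whole blood. The choice of model and the synthetic data points are our assumptions for illustration, not the authors' analysis.

```python
import math

def casson_fit(shear_rates, shear_stresses):
    """Fit the Casson model
        sqrt(tau) = sqrt(tau0) + sqrt(eta) * sqrt(gamma_dot)
    by ordinary least squares on the square-root-transformed data.
    Returns (tau0, eta): yield stress in Pa and Casson viscosity in Pa*s."""
    xs = [math.sqrt(g) for g in shear_rates]
    ys = [math.sqrt(t) for t in shear_stresses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return intercept ** 2, slope ** 2

# Synthetic flow-curve points generated from tau0 = 5 mPa, eta = 3.5 mPa*s
# (plausible orders of magnitude for whole blood, used here only as a check):
rates = [50.0, 100.0, 200.0, 400.0, 800.0]  # 1/s
stresses = [(math.sqrt(0.005) + math.sqrt(0.0035 * g)) ** 2 for g in rates]
tau0, eta = casson_fit(rates, stresses)
print(round(tau0, 4), round(eta, 4))  # recovers ~0.005 and ~0.0035
```

At the high shear rates used in the study (>350 s-1), the yield-stress term becomes negligible and the apparent viscosity tau/gamma_dot approaches the Casson viscosity eta, which is why high-shear flow curves give a stable per-subject viscosity for comparing the two anticoagulants.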

Keywords: falling-needle rheometer, human blood, viscosity, anticoagulant

Procedia PDF Downloads 436
13096 A Gamification Teaching Method for Software Measurement Process

Authors: Lennon Furtado, Sandro Oliveira

Abstract:

The importance of an effective measurement program lies in the ability to control and predict what can be measured; thus, a measurement program can provide a basis for decision-making in support of an organization's interests. An effective measurement program is therefore only possible with a team of software engineers well trained in measurement. However, the literature indicates that few computer science courses include the teaching of the software measurement process in their programs, and even these generally present only basic theoretical concepts of the process, with little or no measurement in practice, which results in students' lack of motivation to learn the measurement process. In this context, according to some experts in software process improvement, one of the most widely used approaches for maintaining motivation and commitment to a software process improvement program is gamification. This paper therefore presents a proposal for teaching the measurement process through gamification, which seeks to improve student motivation and performance in the assimilation of tasks related to software measurement by incorporating game elements into the practice of the measurement process, making it more attractive for learning. To validate the proposal, a comparison will be made between two distinct groups of 20 students from a Software Quality class: a control group of students who will not use the gamification proposal to learn the software measurement process, and an experiment group of students who will. This paper will analyze the objective and subjective results for each group: as the objective result, the grade reached by each student at the end of the course, and as the subjective result, a post-course questionnaire with each student's opinion of the teaching method. Finally, this paper aims to confirm or refute the following hypothesis: the gamification proposal for teaching the software measurement process appropriately motivates students, giving them the competence necessary for the practical application of the measurement process.

Keywords: education, gamification, software measurement process, software engineering

Procedia PDF Downloads 310
13095 Human Pressure Threatens Swayne’s Hartebeest to the Point of Local Extinction from the Savannah Plains of Nech Sar National Park, South Rift Valley, Ethiopia

Authors: Simon Shibru, Karen Vancampenhout, Jozef Deckers, Herwig Leirs

Abstract:

We investigated the population size of the endemic and endangered Swayne’s Hartebeest (Alcelaphus buselaphus swaynei) in Nech Sar National Park from 2012 to 2014 and document the major threats that have brought the species to the verge of local extinction. The park was once known for its abundant population of Swayne’s Hartebeest. We used direct total count methods for the census and administered semi-structured interviews and open-ended questionnaires to senior scouts who are members of the local communities. Historical records were obtained to evaluate population trends since 1974. The density of the animal decreased from 65 individuals per 100 km2 in 1974 to 1 individual per 100 km2 in 2014, a decline of 98.5% over the past 40 years. The respondents agreed that the conservation status of the park is now at its worst ever, with only 2 Swayne’s Hartebeest left, down from 4 individuals in 2012 and 12 in 2009. Hunting and habitat loss were identified as the main threats driving the Swayne’s Hartebeest towards local extinction, with the unsuitable season of reproduction and shortage of forage as minor factors; predation, fire, disease, and ticks were not considered causes of the declining trend. Hunting happens mostly as a kind of revenge, since the local community believed they were pushed off the land because of the presence of Swayne’s Hartebeest in the area; respondents agreed that this revenge was a response to the communities' unwilling displacement from the park in 1982/83. This conflict situation results from the exclusionary wildlife management policy of the country. We conclude that human interventions in general, and illegal hunting in particular, have pushed the Swayne’s Hartebeest to the point of local extinction. We therefore recommend an inclusive wildlife management approach for the continued existence of the park together with its natural resources, so that sustainable use of the resources is in place.

Keywords: hunting, habitat destruction, local extinction, Nech Sar National Park, Swayne’s Hartebeest

Procedia PDF Downloads 464
13094 Uniqueness of Fingerprint Biometrics to the Human Species: A Review

Authors: Siddharatha Sharma

Abstract:

With the advent of technology and machines, the role of biometrics in society is taking an important place for secured living. Security issues are the major concern in today’s world and continue to grow in intensity and complexity. Biometrics based recognition, which involves precise measurement of the characteristics of living beings, is not a new method. Fingerprints are being used for several years by law enforcement and forensic agencies to identify the culprits and apprehend them. Biometrics is based on four basic principles i.e. (i) uniqueness, (ii) accuracy, (iii) permanency and (iv) peculiarity. In today’s world fingerprints are the most popular and unique biometrics method claiming a social benefit in the government sponsored programs. A remarkable example of the same is UIDAI (Unique Identification Authority of India) in India. In case of fingerprint biometrics the matching accuracy is very high. It has been observed empirically that even the identical twins also do not have similar prints. With the passage of time there has been an immense progress in the techniques of sensing computational speed, operating environment and the storage capabilities and it has become more user convenient. Only a small fraction of the population may be unsuitable for automatic identification because of genetic factors, aging, environmental or occupational reasons for example workers who have cuts and bruises on their hands which keep fingerprints changing. Fingerprints are limited to human beings only because of the presence of volar skin with corrugated ridges which are unique to this species. Fingerprint biometrics has proved to be a high level authentication system for identification of the human beings. Though it has limitations, for example it may be inefficient and ineffective if ridges of finger(s) or palm are moist authentication becomes difficult. 
This paper focuses on the uniqueness of fingerprints to human beings in comparison with other living beings, and reviews advancements in emerging technologies and their limitations.

Keywords: fingerprinting, biometrics, human beings, authentication

Procedia PDF Downloads 316
13093 Effect of Cellulase Pretreatment for n-Hexane Extraction of Oil from Garden Cress Seeds

Authors: Boutemak Khalida, Dahmani Siham

Abstract:

Garden cress (Lepidium sativum L.), belonging to the family Brassicaceae, is an edible annual herb. Its various parts (roots, leaves and seeds) have been used to treat various human ailments, and its seed extracts have been screened for biological activities including hypotensive, antimicrobial, bronchodilator, hypoglycaemic and antianemic effects. The aim of the present study is to optimize the process parameters (cellulase concentration and incubation time) of the enzymatic pre-treatment of garden cress seeds and to evaluate the effect of cellulase pre-treatment of the crushed seeds on the oil yield, physico-chemical properties and antibacterial activity, in comparison with a non-enzymatic method. The optimum parameters of the cellulase pre-treatment were as follows: a cellulase concentration of 0.1% w/w and an incubation time of 2 h. After enzymatic pre-treatment, the oil was extracted with n-hexane for 1.5 h; the oil yield was 4.01% with cellulase pre-treatment as against 10.99% in the control sample. The decrease in yield might be a result of mucilage: garden cress seeds are covered with a layer of mucilage that gels on contact with water. The antibacterial activity was assessed using the agar diffusion method against four food-borne pathogens (Escherichia coli, Salmonella typhi, Staphylococcus aureus, Bacillus subtilis). The results showed that the bacterial strains are very sensitive to the oil obtained with cellulase pre-treatment: Staphylococcus aureus was extremely sensitive, with the largest zone of inhibition (40 mm); Escherichia coli and Salmonella typhi were very sensitive, with zones of inhibition of 26 mm; and Bacillus subtilis was moderately sensitive, with an inhibition zone of 16 mm. The strains showed no sensitivity to the oil obtained without enzymatic pre-treatment (zone of inhibition < 8 mm). Enzymatic pre-treatment could thus enhance the antimicrobial activity of the oil, and holds good potential for use in the food and pharmaceutical industries.

Keywords: Lepidium sativum L., cellulase, enzymatic pretreatment, antibacterial activity

Procedia PDF Downloads 453