Search results for: normal tension glaucoma
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3312

3012 Creative Mathematics – Action Research of a Professional Development Program in an Icelandic Compulsory School

Authors: Osk Dagsdottir

Abstract:

Background: Gait classification allows clinicians to differentiate gait patterns into clinically important categories that support clinical decision making. Reliable comparison of gait data between healthy individuals and patients requires knowledge of the gait parameters of normal children in the relevant age group. However, there is still a lack of gait databases for normal children of different ages. Objectives: This study aims to investigate the kinematics of the lower limb joints during gait for normal children in different age groups. Methods: Fifty-three normal children (34 boys, 19 girls) were recruited in this study. All the children were aged between 5 and 16 years. Three age groups were defined: young child (5-7 years), child (8-11 years), and adolescent (12-16 years). When a participant agreed to take part in the project, their parents signed a consent form. A Vicon® motion capture system was used to collect gait data. Participants were asked to walk at a comfortable speed along a 10-meter walkway, and each participant completed up to 20 trials. Three good trials were analyzed using the Vicon Plug-in-Gait model to obtain gait parameters (e.g., walking speed, cadence, stride length) and joint parameters (e.g., joint angles, forces, moments). Moreover, each gait cycle was divided into 8 phases. The range of motion (ROM) of the pelvis, hip, knee, and ankle joints in three planes for both limbs was calculated using an in-house program. Results: The temporal-spatial variables of the three age groups of normal children were compared with each other, and significant differences (p < 0.05) were found between the groups. Step length and walking speed increased gradually from the young child group to the adolescent group, while cadence gradually decreased. The mean and standard deviation (SD) of step length for the young child, child, and adolescent groups were 0.502 ± 0.067 m, 0.566 ± 0.061 m, and 0.672 ± 0.053 m, respectively. The mean and SD of cadence for the young child, child, and adolescent groups were 140.11 ± 15.79 steps/min, 129 ± 11.84 steps/min, and 115.96 ± 6.47 steps/min, respectively. Moreover, significant differences were observed in kinematic parameters, both over the whole gait cycle and within individual phases. For example, the ROM of the knee angle in the sagittal plane over the whole cycle was larger in the young child group (65.03 ± 0.52 deg) than in the child group (63.47 ± 0.47 deg). Conclusion: Our results show significant differences between the age groups across the gait phases, and thus children's walking performance changes with age. Therefore, it is important for clinicians to consider the age group when analyzing patients with lower limb disorders before any clinical treatment.
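
As a rough illustration of the between-group comparison described above, the following sketch shows how a temporal-spatial parameter such as step length could be compared between two age groups with an independent t-test; the sample arrays are hypothetical and only loosely based on the reported means and SDs.

```python
# Hedged sketch: comparing a temporal-spatial gait parameter (step length)
# between two age groups with an independent t-test. The sample arrays are
# hypothetical; only the analysis pattern reflects the study design.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical step-length samples (m), loosely based on the reported means/SDs
young_child = rng.normal(0.502, 0.067, size=20)
adolescent = rng.normal(0.672, 0.053, size=20)

t_stat, p_value = stats.ttest_ind(young_child, adolescent, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Significant difference in step length between the groups")
```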

Keywords: action research, creative learning, mathematics education, professional development

Procedia PDF Downloads 89
3011 Reflecting and Teaching on the Dialectical Nature of Social Work

Authors: Eli Buchbinder

Abstract:

Dialectics theory perceives two or more forces or themes as mutually opposed and negating on the one hand and as interdependent for their definition, existence, and resolution on the other. Such opposites might never be fully reconciled but might, simultaneously, continue to produce a higher level of integration and synthesis as well as tension, contradictions, and paradoxes. The identity of social work is constructed between poles, an understanding that emerges through the key concepts that shape the profession. The key concept of person-in-environment creates a dialectical tension between the psychological and the social pole; important examples of this tension concern the psychological versus the social nature of human beings. This meta-perspective influences and constructs the implementation of values, ways of intervention, and professional relationships, e.g., creating a conflict between personal/social empowerment and social control and correction as the aims of the profession. Social work is dynamic and changing, with a unique way of perceiving and conceptualizing human behavior. Social workers must be able to face and accept the contradictory elements inherent in practicing social work. The basic philosophy for social work education is a dialectic conceptualization. In light of the above, social work students require dialectics as a critical mode of perception, reflection, and intervention. In the presentation, the focus will be on reflecting on how students are taught to conceptualize dialectics as a frame when training to be social workers. It is believed that the focus should emphasize two points: 1) the need to assist students to identify poles and to analyze the interrelationships created between them while coping emotionally with the tension and difficulties involved in containing these poles; 2) teaching students to integrate poles as a basis for assessment, planning, and intervention.

Keywords: professional ontology, generic social work education, skills and values of social work, reflecting on social work teaching methods

Procedia PDF Downloads 61
3010 Therapeutic Efficacy and Safety Profile of Tolvaptan Administered in Hyponatremia Patients

Authors: Sree Vennela P., V. Samyuktha Bhardwaj

Abstract:

Hyponatremia is an electrolyte disturbance in which the sodium ion concentration in the serum is lower than normal. Sodium is the dominant extracellular cation (positive ion) and cannot freely cross from the interstitial space through the cell membrane into the cell. Its homeostasis (stability of concentration) is vital to the normal function of any cell. Normal serum sodium levels are between 135 and 145 mEq/L. Hyponatremia is defined as a serum level of less than 135 mEq/L and is considered severe when the serum level is below 125 mEq/L. In the vast majority of cases, hyponatremia occurs as a result of excess body water diluting the serum sodium (the salt level in the blood). Hyponatremia is often a complication of other medical illnesses in which excess water accumulates in the body at a higher rate than it can be excreted (for example, in congestive heart failure, the syndrome of inappropriate antidiuretic hormone secretion (SIADH), or polydipsia). Sometimes it may be a result of over-hydration (drinking too much water). Lack of sodium (salt) is very rarely the cause of hyponatremia, although it can promote hyponatremia indirectly. In particular, sodium loss can lead to a state of volume depletion (loss of blood volume in the body), with volume depletion serving as a signal for the release of ADH (antidiuretic hormone). As a result of ADH-stimulated water retention (too much water in the body), blood sodium becomes diluted and hyponatremia results.
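
As a minimal illustration of the thresholds quoted above, the sketch below classifies a serum sodium value; the cut-offs are exactly those stated in the abstract and the function is for illustration only.

```python
# Hedged sketch: classifying a serum sodium value using the thresholds quoted
# in the abstract (normal 135-145 mEq/L, hyponatremia < 135, severe < 125).
def classify_sodium(na_meq_per_l: float) -> str:
    if na_meq_per_l < 125:
        return "severe hyponatremia"
    if na_meq_per_l < 135:
        return "hyponatremia"
    if na_meq_per_l <= 145:
        return "normal"
    return "hypernatremia"

for value in (118, 130, 140, 150):
    print(value, "->", classify_sodium(value))
```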

Keywords: tolvaptan, hyponatremia, syndrome of inappropriate antidiuretic hormone secretion (SIADH), euvolemic hyponatremia

Procedia PDF Downloads 242
3009 Laboratory-Based Monitoring of Hepatitis B Virus Vaccination Status in North Central Nigeria

Authors: Nwadioha Samuel Iheanacho, Abah Paul, Odimayo Simidele Michael

Abstract:

Background: The World Health Assembly, through the Global Health Sector Strategy on viral hepatitis, calls for the elimination of viral hepatitis as a public health threat by 2030. All hands are on deck to actualize this goal through effective, active vaccination and monitoring tools. Aim: To combine epidemiologic and laboratory Hepatitis B virus vaccination monitoring tools. Method: Laboratory results of subjects recruited during World Hepatitis week from July 2020 to July 2021 were analyzed, after obtaining their epidemiologic data on Hepatitis B virus risk factors, in the Medical Microbiology Laboratory of Benue State University Teaching Hospital, Nigeria. Result: A total of 500 subjects comprising males 60.0% (n=300/500) and females 40.0% (n=200/500) were recruited. A 53% majority were in the age range of 26 to 36 years. Serologic profiles were as follows: 15.0% (n=75/500) HBsAg; 7.0% (n=35/500) HBeAg; 8.0% (n=40/500) Anti-HBe; 20.0% (n=100/500) Anti-HBc; and 38.0% (n=190/500) Anti-HBs. Immune responses to vaccination were as follows: 47.0% (n=235/500) immune naïve {no serologic marker + normal ALT}; 33% (n=165/500) immunity by vaccination {Anti-HBs + normal ALT}; 5% (n=25/500) immunity due to previous infection {Anti-HBs, Anti-HBc, +/- Anti-HBe + normal ALT}; 8% (n=40/500) carriers {HBsAg, Anti-HBc, Anti-HBe + normal ALT}; and 7% (n=35/500) Anti-HBe serum-negative infections {HBsAg, HBeAg, Anti-HBc + elevated ALT}. Conclusion: The present 33.0% immunity-by-vaccination coverage in Central Nigeria is much lower than the 41.0% national peak in 2013 and falls far short of the Global Health Sector Strategy goal of eliminating viral hepatitis as a public health threat by 2030. Therefore, more creative ideas and collective effort are needed to attain this goal of the World Health Assembly.
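
The serologic categories above map marker combinations plus ALT status to an interpretation. The sketch below encodes that mapping as a simple rule set purely for illustration; the marker names and rules mirror the profiles quoted in the abstract, and it is not intended as a clinical decision tool.

```python
# Hedged sketch: mapping HBV serologic markers plus ALT status to the
# interpretation categories used in the abstract. Illustrative only.
def interpret_hbv(hbsag, hbeag, anti_hbe, anti_hbc, anti_hbs, alt_elevated):
    if not any([hbsag, hbeag, anti_hbe, anti_hbc, anti_hbs]) and not alt_elevated:
        return "immune naive"
    if anti_hbs and not hbsag and not anti_hbc and not alt_elevated:
        return "immunity by vaccination"
    if anti_hbs and anti_hbc and not hbsag and not alt_elevated:
        return "immunity due to previous infection"
    if hbsag and anti_hbc and anti_hbe and not alt_elevated:
        return "carrier"
    if hbsag and hbeag and anti_hbc and alt_elevated:
        return "anti-HBe-negative active infection"
    return "indeterminate profile"

# Example: a vaccinated subject (anti-HBs only, normal ALT)
print(interpret_hbv(False, False, False, False, True, False))
```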

Keywords: Hepatitis B, vaccination status, laboratory tools, resource-limited settings

Procedia PDF Downloads 48
3008 Supplier Selection by Bi-Objectives Mixed Integer Program Approach

Authors: K.-H. Yang

Abstract:

Many excellent studies have been conducted in the past on topics related to supplier selection. Because the factors considered in supplier selection are complicated and difficult to quantify, most researchers address supplier selection with qualitative approaches; compared to qualitative approaches, quantitative approaches have been applied less in real-world settings. This study applies a quantitative approach to a supplier selection problem that considers operating cost and delivery reliability. Based on these two factors, the Normalized Normal Constraint Method is applied to solve the bi-objective mixed integer program of the supplier selection problem.
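
To make the bi-objective idea concrete, here is a minimal sketch with hypothetical numbers. Rather than the Normalized Normal Constraint Method used in the paper, it simply brute-forces the non-dominated (Pareto-optimal) supplier subsets for the two objectives, minimizing cost and maximizing delivery reliability.

```python
# Hedged sketch: a toy bi-objective supplier selection. All numbers are
# hypothetical; brute-force enumeration stands in for the NNC method.
from itertools import combinations

suppliers = {"A": (100, 0.95), "B": (80, 0.90), "C": (60, 0.80)}  # (cost, reliability)
pick = 2  # toy constraint: choose exactly two suppliers

candidates = []
for subset in combinations(suppliers, pick):
    cost = sum(suppliers[s][0] for s in subset)
    reliability = min(suppliers[s][1] for s in subset)  # weakest-link reliability
    candidates.append((subset, cost, reliability))

# Keep only non-dominated solutions (no other subset is cheaper and at least as reliable)
pareto = [c for c in candidates
          if not any(o[1] <= c[1] and o[2] >= c[2] and o != c for o in candidates)]
for subset, cost, rel in pareto:
    print(subset, "cost =", cost, "reliability =", rel)
```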

Keywords: bi-objectives MIP, normalized normal constraint method, supplier selection, quantitative approach

Procedia PDF Downloads 386
3007 Case Study Hyperbaric Oxygen Therapy for Idiopathic Sudden Sensorineural Hearing Loss

Authors: Magdy I. A. Alshourbagi

Abstract:

Background: The National Institute for Deafness and Communication Disorders defines idiopathic sudden sensorineural hearing loss (ISSNHL) as the idiopathic loss of hearing of at least 30 dB across 3 contiguous frequencies occurring within 3 days. The most common clinical presentation involves an individual experiencing sudden unilateral hearing loss, tinnitus, a sensation of aural fullness, and vertigo. The etiologies and pathologies of ISSNHL remain unclear. Several pathophysiological mechanisms have been described, including vascular occlusion, viral infections, labyrinthine membrane breaks, immune-associated disease, abnormal cochlear stress response, trauma, abnormal tissue growth, toxins, ototoxic drugs, and cochlear membrane damage. The rationale for the use of hyperbaric oxygen to treat ISSNHL is supported by an understanding of the high metabolism and paucity of vascularity of the cochlea. The cochlea and the structures within it require a high oxygen supply, while the direct vascular supply, particularly to the organ of Corti, is minimal. Tissue oxygenation of the structures within the cochlea occurs via oxygen diffusion from cochlear capillary networks into the perilymph and the cortilymph. The perilymph is the primary oxygen source for these intracochlear structures. Unfortunately, perilymph oxygen tension is decreased significantly in patients with ISSNHL. To achieve a consistent rise in perilymph oxygen content, the arterial-perilymphatic oxygen concentration difference must be extremely high, and this can be restored with hyperbaric oxygen therapy. Subject and Methods: A 37-year-old man presented at the clinic with a five-day history of muffled hearing and tinnitus of the right ear. The symptoms were of sudden onset, with no associated pain, dizziness, or otorrhea, and there was no past history of hearing problems or medical illness. Family history was negative and physical examination was normal. Otologic examination revealed normal tympanic membranes bilaterally, with no evidence of cerumen or middle ear effusion. Tuning fork examination showed a positive Rinne test bilaterally but lateralization of the Weber test to the left side, indicating right-ear sensorineural hearing loss. Audiometric analysis confirmed sensorineural hearing loss of about 70 dB across all frequencies in the right ear. Routine lab work was within normal limits. A clinical diagnosis of idiopathic sudden sensorineural hearing loss of the right ear was made, and the patient began medical treatment (corticosteroids, a vasodilator, and HBO therapy). The recommended treatment profile consists of 100% O2 at 2.5 atmospheres absolute for 60 minutes daily (six days per week) for 40 treatments. The optimal number of HBOT treatments will vary, depending on the severity and duration of symptoms and the response to treatment. Results: As HBOT is not yet a standard treatment for idiopathic sudden sensorineural hearing loss, it was introduced to this patient as an adjuvant therapy. The HBOT program was scheduled for 40 sessions; we used a 12-seat multiplace chamber, and HBOT was started on day seven after the onset of hearing loss. After the tenth session of HBOT, improvement of both hearing (on audiogram) and tinnitus was obtained in the affected (right) ear. Conclusions: HBOT may be used for idiopathic sudden sensorineural hearing loss as an adjuvant therapy. It may promote oxygenation of the inner ear apparatus and revive hearing ability. Patients who fail to respond to oral and intratympanic steroids may benefit from this treatment. Further investigation is warranted, including animal studies to understand the molecular and histopathological aspects of HBOT and randomized controlled clinical studies.

Keywords: idiopathic sudden sensorineural hearing loss (issnhl), hyperbaric oxygen therapy (hbot), the decibel (db), oxygen (o2)

Procedia PDF Downloads 408
3006 Diminishing Constitutional Hyper-Rigidity by Means of Digital Technologies: A Case Study on E-Consultations in Canada

Authors: Amy Buckley

Abstract:

The purpose of this article is to assess the problem of constitutional hyper-rigidity and to consider how it, and the associated tensions with democratic constitutionalism, can be diminished by means of digital democratic technologies. In other words, this article examines how digital technologies can assist us in ensuring fidelity to the will of the constituent power without paying the price of hyper-rigidity. In doing so, it is impossible to ignore that digital strategies can also harm democracy through, for example, manipulation, hacking, 'fake news,' and the like. This article considers the tension between constitutional hyper-rigidity and democratic constitutionalism and the relevant strengths and weaknesses of digital democratic strategies before undertaking a case study on Canadian e-consultations and drawing its conclusions. This article observes democratic constitutionalism through the lens of the theory of deliberative democracy to suggest that the application of digital strategies can, notwithstanding their pitfalls, improve a constituency's amendment culture and, thus, diminish constitutional hyper-rigidity. Constitutional hyper-rigidity is not a new or underexplored concept. At a high level, a constitution can be said to be 'hyper-rigid' when its formal amendment procedure is so difficult to enact that amendment does not take place or is limited in its application. This article claims that hyper-rigidity is one problem with ordinary constitutionalism that fails to satisfy the principled requirements of democratic constitutionalism. Given the rise and development of technology since the Digital Revolution, there has been a significant expansion in the possibility for digital democratic strategies to overcome the failures of democratic constitutionalism resulting from constitutional hyper-rigidity. Typically, these strategies have included, inter alia, e-consultations, e-voting systems, and online polling forums, all of which significantly improve the ability of politicians and judges to directly obtain the opinion of constituents on any number of matters. This article expands on the application of these strategies through its Canadian e-consultation case study and presents them as a solution to poor amendment culture and, consequently, constitutional hyper-rigidity. Hyper-rigidity is a common descriptor of many written and unwritten constitutions, including the United States, Australian, and Canadian constitutions, to name just a few. This article undertakes a case study on Canada in particular, as it is a jurisdiction less commonly cited in the academic literature concerned with hyper-rigidity and because Canada has, to some extent, championed the use of e-consultations. In Part I of this article, I identify the problem: that the consequences of constitutional hyper-rigidity are in tension with the principles of democratic constitutionalism. In Part II, I identify and explore a potential solution, the implementation of digital democratic strategies as a means of reducing constitutional hyper-rigidity. In Part III, I explore Canada's e-consultations as a case study for assessing whether digital democratic strategies do, in fact, improve a constituency's amendment culture, thus reducing constitutional hyper-rigidity and the associated tension with the principles of democratic constitutionalism. The idea is to run the case study and then assess whether its conclusions can be generalised.

Keywords: constitutional hyper-rigidity, digital democracy, deliberative democracy, democratic constitutionalism

Procedia PDF Downloads 49
3005 Gaze Behaviour of Individuals with and without Intellectual Disability for Nonaccidental and Metric Shape Properties

Authors: S. Haider, B. Bhushan

Abstract:

Eye gaze behaviour of individuals with and without intellectual disability is investigated in an eye tracking study in terms of sensitivity to nonaccidental (NAP) and metric (MP) shape properties. Total fixation time is used as an indirect measure of attention allocation. Previous studies have found mean reaction times for nonaccidental properties to be shorter than for metric properties when the MP and NAP differences were equalized. Methods: Twenty-five individuals with intellectual disability (mild and moderate levels of mental retardation) and twenty-seven normal individuals were compared on mean total fixation duration, accuracy level, and mean reaction time for mild NAP, extreme NAP, and metric properties of images. 2D images of cylinders were adapted and made into forced-choice match-to-sample tasks. A Tobii TX300 eye tracker was used to record total fixation duration, and data were obtained from the Areas of Interest (AOI). Variable trial duration (total reaction time of each participant) and fixed trial duration (data taken at each second from one to fifteen seconds) data were used for the analyses. The two groups did not differ in terms of fixation times (fixed as well as variable) across any of the three image manipulations but differed in terms of reaction time and accuracy. Normal individuals had longer reaction times compared to individuals with intellectual disability across all types of images. Both groups differed significantly on the accuracy measure across all image types, with normal individuals performing better across all three types of images. Mild NAP vs. metric differences: there was a significant difference between mild NAP and metric properties of images in terms of reaction times. Mild NAP images had significantly longer reaction times compared to metric images for normal individuals, but this difference was not found for individuals with intellectual disability. Mild NAP images had significantly better accuracy levels compared to metric images for both groups. In conclusion, the type of image manipulation did not result in differences in attention allocation for individuals with and without intellectual disability. Mild nonaccidental properties facilitate better accuracy compared to metric properties in both groups, but this advantage is seen only for the normal group in terms of mean reaction time.

Keywords: eye gaze fixations, eye movements, intellectual disability, stimulus properties

Procedia PDF Downloads 531
3004 Classification of Digital Chest Radiographs Using Image Processing Techniques to Aid in Diagnosis of Pulmonary Tuberculosis

Authors: A. J. S. P. Nileema, S. Kulatunga, S. H. Palihawadana

Abstract:

A computer-aided detection (CAD) system was developed for the diagnosis of pulmonary tuberculosis from digital chest X-rays using MATLAB image processing techniques and a statistical approach. The study comprised 200 digital chest radiographs collected from the National Hospital for Respiratory Diseases - Welisara, Sri Lanka. Pre-processing was done to remove identification details. Lung fields were segmented and then divided into four quadrants (right upper, left upper, right lower, and left lower) using image processing techniques in MATLAB. Contrast, correlation, homogeneity, energy, entropy, and maximum probability texture features were extracted using the gray level co-occurrence matrix method. Descriptive statistics and normal distribution analysis were performed using SPSS. Based on the radiologists' interpretation, chest radiographs were classified manually into PTB-positive (PTBP) and PTB-negative (PTBN) classes. Features with a standard normal distribution were analyzed using an independent sample t-test between PTBP and PTBN chest radiographs. Among the six features tested, the contrast, correlation, energy, entropy, and maximum probability features showed a statistically significant difference between the two classes at the 95% confidence level and therefore could be used in the classification of chest radiographs for PTB diagnosis. Using the resulting value ranges of the five normally distributed texture features, a classification algorithm was then defined to recognize and classify the quadrant images. If the texture feature values of the quadrant image being tested fall within the defined region, it is identified as a PTBP (abnormal) quadrant and labeled 'Abnormal' in red, with its border highlighted in red; if the texture feature values fall outside the defined value range, it is identified as PTBN (normal) and labeled 'Normal' in blue, with no changes to the image outline. The developed classification algorithm showed a high sensitivity of 92%, which makes it an efficient CAD system, with a modest specificity of 70%.
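
As a rough sketch of the texture-feature step described above (assuming scikit-image rather than MATLAB, and a random placeholder image in place of a lung-field quadrant), the six GLCM features and a simple range-based abnormality rule could look like this; the reference ranges shown are purely illustrative.

```python
# Hedged sketch: extracting the six GLCM texture features named in the abstract
# (contrast, correlation, homogeneity, energy, entropy, maximum probability).
# Uses scikit-image >= 0.19 (graycomatrix/graycoprops); placeholder image data.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

quadrant = np.random.randint(0, 256, size=(128, 128), dtype=np.uint8)  # placeholder
glcm = graycomatrix(quadrant, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)

features = {prop: graycoprops(glcm, prop)[0, 0]
            for prop in ("contrast", "correlation", "homogeneity", "energy")}
p = glcm[:, :, 0, 0]
features["entropy"] = -np.sum(p[p > 0] * np.log2(p[p > 0]))
features["max_probability"] = p.max()

# Simple range-based rule, in the spirit of the paper: flag the quadrant as
# abnormal if each feature falls inside a (hypothetical) PTB-positive range.
ptbp_ranges = {"contrast": (10, 200), "entropy": (6, 9)}  # illustrative ranges only
is_abnormal = all(lo <= features[k] <= hi for k, (lo, hi) in ptbp_ranges.items())
print(features, "abnormal:", is_abnormal)
```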

Keywords: chest radiographs, computer aided detection, image processing, pulmonary tuberculosis

Procedia PDF Downloads 95
3003 High Pressure Thermophysical Properties of Complex Mixtures Relevant to Liquefied Natural Gas (LNG) Processing

Authors: Saif Al Ghafri, Thomas Hughes, Armand Karimi, Kumarini Seneviratne, Jordan Oakley, Michael Johns, Eric F. May

Abstract:

Knowledge of the thermophysical properties of complex mixtures at extreme conditions of pressure and temperature has always been essential to the Liquefied Natural Gas (LNG) industry's evolution because of the tremendous technical challenges present at all stages of the supply chain, from production to liquefaction to transport. Each stage is designed using predictions of the mixture's properties, such as density, viscosity, surface tension, heat capacity, and phase behaviour as a function of temperature, pressure, and composition. Unfortunately, currently available models lead to equipment over-designs of 15% or more. To achieve better designs that work more effectively and/or over a wider range of conditions, new fundamental property data are essential, both to resolve discrepancies in our current predictive capabilities and to extend them to the higher-pressure conditions characteristic of many new gas fields. Furthermore, innovative experimental techniques are required to measure different thermophysical properties at high pressures and over a wide range of temperatures, including near the mixture's critical points, where gas and liquid become indistinguishable and most existing predictive fluid property models break down. In this work, we present a wide range of experimental measurements made for different binary and ternary mixtures relevant to LNG processing, with a particular focus on viscosity, surface tension, heat capacity, bubble points, and density. For this purpose, customized and specialized apparatus were designed and validated over the temperature range (200 to 423) K at pressures to 35 MPa. The mixtures studied were (CH4 + C3H8), (CH4 + C3H8 + CO2), and (CH4 + C3H8 + C7H16); in the last of these, the heptane content was up to 10 mol%. Viscosity was measured using a vibrating wire apparatus, while mixture densities were obtained by means of a high-pressure magnetic-suspension densimeter and an isochoric cell apparatus; the latter was also used to determine bubble points. Surface tensions were measured using the capillary rise method in a visual cell, which also enabled the location of the mixture critical point to be determined from observations of critical opalescence. Mixture heat capacities were measured using a customised high-pressure differential scanning calorimeter (DSC). The combined standard relative uncertainties were less than 0.3% for density, 2% for viscosity, 3% for heat capacity, and 3% for surface tension. The extensive experimental data gathered in this work were compared with a variety of advanced engineering models frequently used for predicting the thermophysical properties of mixtures relevant to LNG processing. In many cases, the discrepancies between the predictions of different engineering models for these mixtures were large, and the high-quality data allowed erroneous but often widely used models to be identified. The data enable the development of new or improved models, to be implemented in process simulation software, so that the fluid properties needed for equipment and process design can be predicted reliably. This will in turn enable reduced capital and operational expenditure by the LNG industry. The current work also aided the community of scientists working to advance theoretical descriptions of fluid properties by allowing deficiencies in theoretical descriptions and calculations to be identified.

Keywords: LNG, thermophysical, viscosity, density, surface tension, heat capacity, bubble points, models

Procedia PDF Downloads 250
3002 Motor Gear Fault Diagnosis by Measurement of Current, Noise and Vibration on AC Machine

Authors: Sun-Ki Hong, Ki-Seok Kim, Yong-Ho Jo

Abstract:

Many motors are used in industry, and many researchers have therefore studied motor failure diagnosis. In this paper, the effect of the measuring environment on the diagnosis of a fault in a gear connected to a motor shaft is studied. The fault diagnosis is carried out by comparing a normal gear and an abnormal gear. The measured FFT data are compared with the normal data and analyzed for q-axis current, noise, and vibration. The diagnosis results are compared for good and bad measuring environments. The results show that a bad measuring environment may prevent the motor gear fault from being detected accurately. It is therefore emphasized that the measuring environment should be carefully prepared.
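
A minimal sketch of the spectral comparison idea, using synthetic signals: the 'faulty' record adds a sideband component near an assumed gear-mesh tone, and white noise stands in for measurement conditions. The sampling rate and frequencies are hypothetical.

```python
# Hedged sketch: comparing the vibration spectra of a normal and a faulty gear
# with an FFT, in the spirit of the diagnosis described above. Synthetic data.
import numpy as np

fs = 10_000                      # sampling rate (Hz), assumed
t = np.arange(0, 1.0, 1 / fs)
mesh = np.sin(2 * np.pi * 500 * t)              # gear-mesh tone (assumed 500 Hz)
normal = mesh + 0.05 * np.random.randn(t.size)
faulty = mesh + 0.4 * np.sin(2 * np.pi * 540 * t) + 0.05 * np.random.randn(t.size)

freqs = np.fft.rfftfreq(t.size, 1 / fs)
spec_normal = np.abs(np.fft.rfft(normal)) / t.size
spec_faulty = np.abs(np.fft.rfft(faulty)) / t.size

# The fault shows up as extra energy away from the mesh frequency.
band = (freqs > 520) & (freqs < 560)
print("sideband energy, normal:", spec_normal[band].sum())
print("sideband energy, faulty:", spec_faulty[band].sum())
```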

Keywords: motor fault, diagnosis, FFT, vibration, noise, q-axis current, measuring environment

Procedia PDF Downloads 531
3001 Effect of Concrete Strength and Aspect Ratio on Strength and Ductility of Concrete Columns

Authors: Mohamed A. Shanan, Ashraf H. El-Zanaty, Kamal G. Metwally

Abstract:

This paper presents the effect of concrete compressive strength and rectangularity ratio on the strength and ductility of normal- and high-strength reinforced concrete columns confined with transverse steel under axial compressive loading. Nineteen normal-strength rectangular concrete columns with different variables were tested in this research to study the effect of concrete compressive strength and rectangularity ratio on the strength and ductility of columns. The paper also presents a nonlinear finite element analysis, using ANSYS 15, of these specimens and of another twenty high-strength concrete square columns tested by other researchers. The results indicate that the axial force-axial strain relationship obtained from the analytical model using ANSYS is in good agreement with the experimental data. The comparison shows that ANSYS is capable of modeling and predicting the actual nonlinear behavior of confined normal- and high-strength concrete columns under concentric loading. The maximum applied load and the maximum strain have also been confirmed to be satisfactory. Based on this agreement between the experimental and analytical results, a parametric numerical study was conducted with ANSYS 15 to clarify and evaluate the effect of each variable on the strength and ductility of the columns.

Keywords: ANSYS, concrete compressive strength effect, ductility, rectangularity ratio, strength

Procedia PDF Downloads 484
2998 Adolescent and Adult Hip Dysplasia on Plain Radiographs: Analysis of Measurements and an Attempt to Optimize Diagnostic and Performance Approaches for Patients with Periacetabular Osteotomy (PAO)

Authors: Naum Simanovsky MD, Michael Zaidman MD, Vladimir Goldman MD.

Abstract:

105 plain AP radiographs of normal adult pelvises (210 hips) were evaluated, and different measurements of normal and dysplastic hip joints in 45 patients were analyzed. An attempt was made to establish a reproducible approach, easily applicable in practice, for the evaluation and follow-up of patients with hip dysplasia. The youngest of our patients was 11 years old and the oldest was 47. Only one of our patients needed conversion to total hip replacement (THR) during ten years of follow-up. It should be emphasized that the selected set of measurements was built especially to serve those who are scheduled for or have undergone PAO. This approach is based on the concept of the acetabulum-femoral head complex and the importance of reliable reference points for measurement. A comparative analysis of the measured parameters between normal and dysplastic hips was performed. Among the 10 selected parameters, we use well-established measures such as the lateral center-edge angle and the head extrusion index, but to serve the specific group of patients with PAO, new parameters such as complex lateralization and complex proximal migration were also considered. In our opinion, the proposed approach is easily applicable in busy clinical practice, satisfactorily delineates hip pathology, and gives the surgeon who is going to perform PAO guidelines in condensed form. It is also a useful tool for postoperative follow-up after PAO.

Keywords: periacetabular osteotomy, plain radiograph measurements, adolescents, adults

Procedia PDF Downloads 45
2999 Normal Hematopoietic Stem Cell and the Toxic Effect of Parthenolide

Authors: Alsulami H., Alghamdi N., Alasker A., Almohen N., Shome D.

Abstract:

Most conventional chemotherapeutic agents used for the treatment of cancers not only eradicate cancer cells but also affect normal hematopoietic stem cells (HSCs), which leads to severe pancytopenia during treatment. Therefore, a need exists for novel approaches to treat cancer with minimal or no effect on normal HSCs. Parthenolide (PTL), a herbal product occurring naturally in the plant feverfew, is a potential new chemotherapeutic agent for the treatment of many cancers such as acute myeloid leukemia (AML) and chronic lymphocytic leukemia (CLL). In this study, we investigated the effect of different PTL concentrations on the viability of normal HSCs and on the ability of these cells to form colonies after being treated with PTL in vitro. Methods: 24 samples of bone marrow and cord blood were collected with consent, and mononuclear cells were separated using density gradient separation. These cells were then exposed to various concentrations of PTL for 24 hours. Cell viability after culture was determined using 7-AAD in a flow cytometry test. Additionally, the impact of PTL on hematopoietic stem cells was evaluated using a colony forming unit (CFU) assay. Furthermore, the level of NF-κB expression was assessed using a PE-labelled anti-pNF-κB p65 antibody. Results: This study showed that there was no statistically significant difference in the percentage of cell death between untreated cells and cells treated with 5 μM PTL (p = 0.7), 10 μM PTL (p = 0.4), or 25 μM PTL (p = 0.09). However, at higher doses, PTL caused a significant increase in the percentage of cell death; these results were significant when compared to the untreated control (p < 0.001). The response of cord blood cells (n=4), on the other hand, was slightly different from that of bone marrow cells in that the percentage of cell death became significant at 100 μM PTL; cord blood cells therefore seemed more resistant than bone marrow cells. Discussion & Conclusion: At concentrations ≤ 25 μM, PTL has minimal or no effect on HSCs in vitro. Cord blood HSCs are more resistant to PTL compared to bone marrow HSCs. This could be due to the higher percentage of T-lymphocytes, which are resistant to PTL, in CB samples (85% in CB vs. 56% in BM). Additionally, CB samples contained a higher proportion of CD34+ cells, with 14.5% brightly CD34+ cells compared to only 1% in normal BM. These bright CD34+ cells in CB were mostly negative for early-stage stem cell maturation antigens, making them young and resilient to oxidative stress and high concentrations of PTL.

Keywords: stem cell, parthenolide, NFKB, CLL

Procedia PDF Downloads 16
2998 Correlation between Dynamic Knee Valgus with Isometric Hip External Rotators Strength during Single Leg Landing

Authors: Ahmed Fawzy, Khaled Ayad, Gh. M. Koura, W. Reda

Abstract:

Excessive frontal plane motion of the lower extremity during sports activities is thought to be a contributing factor to many traumatic and overuse injuries of the knee joint, yet little is known about the biomechanical factors that contribute to this loading pattern. Objectives: The purpose of this study was to investigate whether there is a relationship between isometric strength of the hip external rotators and the value of the frontal plane projection angle (FPPA) during single leg landing tasks in normal male subjects. Methods: One hundred male subjects, free from lower extremity injuries for at least the previous six months, participated in this study. Their mean age was 23.25 ± 2.88 years, mean weight was 74.76 ± 13.54 kg, and mean height was 174.23 ± 6.56 cm. The knee frontal plane projection angle was measured with a digital video camera during a single leg landing task. Isometric strength of the hip external rotators was assessed with a portable handheld dynamometer. Muscle strength was normalized to body weight to obtain more accurate measurements. Results: The results demonstrated that there was no significant relationship between isometric strength of the hip external rotators and the value of the FPPA during single leg landing tasks in normal male subjects. Conclusion: It can be concluded that there is no relationship between isometric strength of the hip external rotators and the value of the FPPA during functional activities in normal male subjects.
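
A minimal sketch of the analysis pattern described above, with hypothetical data: dynamometer strength is normalized to body weight and then correlated with the FPPA using a Pearson test.

```python
# Hedged sketch: testing the relationship between FPPA and hip external rotator
# strength normalized to body weight. All data arrays are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
body_weight_kg = rng.normal(74.8, 13.5, size=100)
strength_kg = rng.normal(12.0, 3.0, size=100)      # hypothetical dynamometer readings
fppa_deg = rng.normal(8.0, 4.0, size=100)           # hypothetical FPPA values

normalized_strength = strength_kg / body_weight_kg   # normalize to body weight
r, p = stats.pearsonr(normalized_strength, fppa_deg)
print(f"r = {r:.3f}, p = {p:.3f}")  # expect no significant correlation here
```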

Keywords: 2-dimensional motion analysis, hip strength, kinematics, knee injuries

Procedia PDF Downloads 205
2997 A U-shaped Relationship between Body Mass Index and Dysmenorrhea: A Longitudinal Study

Authors: H. Ju, M. Jones, G. D. Mishra

Abstract:

Introduction: Few longitudinal studies have examined the relationship between BMI and dysmenorrhea, and their results have been mixed. This study aims to investigate the long-term association between BMI and dysmenorrhea. Methods: 9,688 women from the Australian Longitudinal Study on Women's Health (ALSWH), a prospective population-based cohort study, were followed for 13 years. Data on all variables, including dysmenorrhea, weight, and height, were collected repeatedly through self-reported questionnaires. The longitudinal association between dysmenorrhea and BMI, or BMI transition (change of BMI category between two successive surveys), was investigated using generalized estimating equations. Results: When the women were aged 22 to 27 years, approximately 11% were obese, 7% underweight, and 25% reported dysmenorrhea. Over the study period, the prevalence of obesity doubled whereas that of underweight declined substantially; the prevalence of dysmenorrhea remained relatively stable. Compared to women with a normal weight, significantly higher odds of reporting dysmenorrhea were detected both for women who were underweight (odds ratio (OR) 1.25, 95% confidence interval (CI) 1.09, 1.43) and for those who were obese (OR 1.20, 95% CI 1.10, 1.31). Being overweight was not associated with an increased risk of dysmenorrhea. Compared to women who remained at normal weight or overweight over time, a significant risk was detected for women who remained underweight or obese (OR 1.35, 95% CI 1.23, 1.49), were underweight but became normal weight or overweight (OR 1.29, 95% CI 1.11, 1.50), or became underweight (OR 1.24, 95% CI 1.01, 1.52). However, the higher risk among obese women disappeared when they lost weight and became normal weight or overweight (OR 1.07, 95% CI 0.87, 1.30). Conclusions: A U-shaped association between BMI and dysmenorrhea was revealed, with a higher risk of dysmenorrhea for both underweight and obese women. Further, the risk disappeared when obese women lost weight and acquired a healthier BMI. Obesity nevertheless poses a greater burden of disease from the public health perspective and thus requires greater effort to tackle this increasing problem at the population level. It is important for women to maintain a healthy weight over time to enjoy better reproductive health.
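
As a rough sketch of the modelling approach named above (generalized estimating equations for repeated binary dysmenorrhea reports), the following example uses statsmodels with an entirely synthetic data frame; the column names, group sizes, and data are hypothetical placeholders.

```python
# Hedged sketch: a GEE model relating repeated dysmenorrhea reports to BMI
# category, with an exchangeable working correlation for repeated surveys.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_women, n_surveys = 200, 5
df = pd.DataFrame({
    "id": np.repeat(np.arange(n_women), n_surveys),
    "bmi_cat": rng.choice(["underweight", "normal", "overweight", "obese"],
                          size=n_women * n_surveys),
    "dysmenorrhea": rng.integers(0, 2, size=n_women * n_surveys),
})

model = smf.gee("dysmenorrhea ~ C(bmi_cat, Treatment('normal'))",
                groups="id", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
print(np.exp(result.params))  # odds ratios relative to the normal-BMI group
```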

Keywords: body mass index, dysmenorrhea, obesity, painful period, underweight

Procedia PDF Downloads 303
2996 Selective Attention as a Search for the Deceased during the Mourning Process

Authors: Sonia Sirtoli Färber

Abstract:

Objective: This study aims to investigate selective attention in the process of mourning as a normal reaction to loss. Method: In order to develop this research, we used a systematic bibliographic review, following a process of investigation, cataloging, careful evaluation, and synthesis of the documentation, associated with the method of thanatological hermeneutics proposed by Elisabeth Kübler-Ross. Conclusion: After a significant loss, especially the death of a loved one or family member, it is normal for the mourner, motivated by the absence, to have a false perception of the presence of the deceased. This phenomenon happens whenever the mourner is in the middle of a crowd, because selective attention causes the mourner to perceive physical characteristics, a tone of voice, or the fragrance of the perfume that the deceased possessed. Details characterizing the dead are perceived by the mourner because he seeks the presence in the absence.

Keywords: Elisabeth Kübler-Ross, mourning, selective attention, thanatology

Procedia PDF Downloads 390
2995 Pilot Program for the Promotion of Normal Childbirth in the North, Northeast and Midwest of Brazil

Authors: Natália Bruno Chaves, Richardes Caúla, Roosevelt do Vale, Daniela Toneti, Rafaela Carvalho, Renata Silva Lopes, Antônio Carlos Júnior, Adner Nobre, Viviane Santiago, Yara Alana Caldato, Estefania Rodriguez Urrego, André Buarque Lemos, Catarina Nucci Stetner, Marcos Mauro Barreto, Stefany Moreira Lima, Mara Cavalcante, Ticiane Ribeiro

Abstract:

The Well Born (Nascer Bem, in Portuguese) Program was created in the Hapvida health network with the aim of improving users' access to safe, high-quality prenatal care. In addition to offering a prenatal care pathway, the inclusion of obstetric nursing and the decentralization of childbirth provide assurance that professionals do not indicate the route of delivery for their own convenience. The introduction of the nursing consultation reinforced the care provided to users, strengthening bonding and reception. In 2021, the program maintained an average of 40% normal births in the north, northeast, and central-west regions of Brazil, an average above that observed in the rest of the country's private health systems, which is around 20%. In addition, the neonatal hospitalization rate of this population remained around 5.1%, a figure below the national average. With these data, the "Nascer Bem" program is affirmed as a safe and effective strategy for the promotion of safe normal birth.

Keywords: quality, safe, prenatal, obstetric nursing

Procedia PDF Downloads 97
2994 Improved Imaging and Tracking Algorithm for Maneuvering Extended UAVs Using High-Resolution ISAR Radar System

Authors: Mohamed Barbary, Mohamed H. Abd El-Azeem

Abstract:

Maneuvering extended object tracking (M-EOT) using high-resolution inverse synthetic aperture radar (ISAR) observations has been gaining momentum recently. This work presents a new robust implementation of the multiple-model (MM) multi-Bernoulli (MB) filter for M-EOT, where the M-EOT's ISAR observations are characterized using a skewed (SK), non-symmetric normal distribution. To cope with possible abrupt changes in the kinematic state, extension, and observation distribution of an extended object when the target maneuvers, a multiple-model technique is presented based on an MB track-before-detect (TBD) filter supported by an SK sub-random matrix model (RMM) or sub-ellipses framework. Simulation results demonstrate the improvement achieved by the proposed filter.

Keywords: maneuvering extended objects, ISAR, skewed normal distribution, sub-RMM, MM-MB-TBD filter

Procedia PDF Downloads 49
2993 Vestibular Dysfunction in Post-Acute Sequelae of SARS-CoV-2 Infection: A Gait Analysis Pilot Study

Authors: Adar Pelah, Avraham Adelman, Amanda Balash, Jake Mitchell, Mattan J. Pelah, Viswadeep Sarangi, Xin Chen Cai, Zadok Storkey, Gregg B. Fields, Ximena Levy, Ali A. Danesh

Abstract:

Introduction: Post-Acute Sequelae of Severe Acute Respiratory Syndrome Coronavirus 2 infection (PASC), or Long COVID, while primarily a respiratory disorder, can also include dizziness lasting weeks to months in individuals who had previously tested positive for COVID-19. This study utilized gait analysis to assess the potential vestibular effects of PASC on the presentation of gait anomalies. Materials and Methods: The study included 11 participants who had tested positive for COVID-19 a mean of 2.8 months prior to gait testing (PP=11) and 8 control participants who had not tested positive for COVID-19 (NP=8). Participants walked 7.5 m at three self-selected speeds: 'slow,' 'normal,' and 'fast.' Mean walking speeds and the overall speed range were determined from four laps on an instrumented walkway using video capture. Results: A Z-test at the 0.05 significance level was used, at the lower tail for speed range, 'normal,' and 'fast,' and at the upper tail for 'slow.' Average speeds (m/s) were: 'slow' (PP=0.709, NP=0.678), 'normal' (PP=1.141, NP=1.170), 'fast' (PP=1.529, NP=1.821), and average range (PP=0.846, NP=1.143). Significant speed decreases for PP relative to NP were observed for 'fast' (-17.43%) and average range (-29.86%), while the changes for 'slow' (+2.44%) and 'normal' (-4.39%) were not significant. Conclusions: Long COVID is a recognized disability (Americans with Disabilities Act), and although it presents variably, dizziness, vertigo, and tinnitus are not uncommon in COVID-19 infection. These results suggest that potential inner-ear damage may persist and manifest in gait changes even after recovery from the acute illness. Further research with a larger sample size may indicate the need for providers to consider PASC when diagnosing patients with vestibular dysfunction.
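
To make the one-tailed Z-test described above concrete, here is a minimal sketch comparing the 'fast' walking speed of the two groups. The group means and sizes come from the abstract, but the standard deviations are not reported there and are assumed purely for illustration.

```python
# Hedged sketch: a one-tailed two-sample Z-test on mean walking speed ('fast',
# PP vs NP, lower tail). SDs are assumed, not taken from the study.
import math
from scipy.stats import norm

mean_pp, mean_np = 1.529, 1.821      # 'fast' walking speed (m/s), from the abstract
sd_pp, sd_np = 0.20, 0.20            # assumed SDs (not reported in the abstract)
n_pp, n_np = 11, 8

z = (mean_pp - mean_np) / math.sqrt(sd_pp**2 / n_pp + sd_np**2 / n_np)
p_lower = norm.cdf(z)                # lower-tail test
print(f"z = {z:.2f}, one-tailed p = {p_lower:.4f}")
```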

Keywords: gait analysis, long-COVID, vestibular dysfunction, walking speed

Procedia PDF Downloads 96
2992 Virtual Dimension Analysis of Hyperspectral Imaging to Characterize a Mining Sample

Authors: L. Chevez, A. Apaza, J. Rodriguez, R. Puga, H. Loro, Juan Z. Davalos

Abstract:

The Virtual Dimension (VD) procedure is used to analyze hyperspectral image (HSI) data in order to estimate the abundance of the mineral components of a mining sample. Hyperspectral images derived from reflectance spectra (NIR region) are pre-treated using the Standard Normal Variate (SNV) and Minimum Noise Fraction (MNF) methodologies. The endmember components are identified by the Simplex Growing Algorithm (SGA) and then adjusted to the reflectance spectra of reference databases using the Simulated Annealing (SA) methodology. The mineral abundances obtained for the studied sample are very close to those obtained using XRD, with a total relative error of 2%.
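
A minimal sketch of the SNV pre-treatment step named above: each reflectance spectrum (one row per pixel) is centred by its own mean and scaled by its own standard deviation. The spectra here are random placeholders.

```python
# Hedged sketch: Standard Normal Variate (SNV) pre-treatment of reflectance
# spectra, applied row-wise before the MNF step. Placeholder data only.
import numpy as np

spectra = np.random.rand(50, 200)  # 50 pixels x 200 NIR wavelengths (placeholder)

def snv(x: np.ndarray) -> np.ndarray:
    mean = x.mean(axis=1, keepdims=True)
    std = x.std(axis=1, keepdims=True)
    return (x - mean) / std

spectra_snv = snv(spectra)
print(spectra_snv.mean(axis=1)[:3], spectra_snv.std(axis=1)[:3])  # ~0 and ~1 per row
```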

Keywords: hyperspectral imaging, minimum noise fraction, MNF, simplex growing algorithm, SGA, standard normal variate, SNV, virtual dimension, XRD

Procedia PDF Downloads 126
2991 Oxidative Status and Some Serum Macro Minerals during Estrus, Anestrous and Repeat Breeding in Cholistani Cattle

Authors: Farah Ali, Laeeq Akbar Lodhi, Riaz Hussain, Muhammad Sufyan

Abstract:

The present study was conducted to determine the macro mineral profile and biomarkers of oxidative stress in Cholistani cattle kept at a public farm and in various villages in district Bahawalpur. For this purpose, 90 blood samples each were collected from estrual, anestrous, and repeat-breeding cattle of different ages and lactation numbers. Reproductive tract examination of all the cattle was carried out to determine reproductive status. Blood samples without EDTA were collected for serum separation from normal cyclic cows on the day of estrus, and from repeat breeder and anestrous cows. The serum calcium levels were significantly decreased (P<0.05) in anestrous cattle (7.31±0.02 mg/dl) as compared to estrual cattle; however, these values did not differ significantly between repeat breeder cattle and cattle in the estrus phase. The concentrations of serum phosphorus were significantly higher (P<0.01) in normal estrual cattle (4.99±0.08 mg/dl) as compared to repeat breeder (3.90±0.06 mg/dl) and anestrous (3.82±0.04 mg/dl) Cholistani cattle. Mean serum MDA levels (nmol/ml) of repeat breeder (2.68±0.18) and anestrous (2.54±0.22) cattle were significantly (P<0.01) higher than those of estrous cattle (1.71±0.03). Moreover, the serum nitric oxide levels (µmol/L) were also significantly (P<0.01) higher in repeat breeder (58.28±4.01) and anestrous (61.40±9.40) cattle than in normal estrous cattle (31.67±6.71). The Ca:P ratio in normal cyclic animals was lower (1.73:1) than in anestrous animals (1.92:1). It can be concluded from the present study that the Ca:P ratio should be near 1.5:1 for better reproductive performance.

Keywords: anestrus, cholistani cattle, minerals, oxidative stress, repeat breeder

Procedia PDF Downloads 573
2990 A Conundrum of Teachability and Learnability of Deaf Adult English as Second Language Learners in Pakistani Mainstream Classrooms: Integration or Elimination

Authors: Amnah Moghees, Saima Abbas Dar, Muniba Saeed

Abstract:

Teaching a second language to deaf learners has always been a challenge in Pakistan. Different approaches and strategies have been followed, but they have resulted in partial or complete failure. The study aims to investigate the language problems faced by adult deaf learners of English as a second language in mainstream classrooms. Moreover, the study also determines the factors involved in language teaching and learning in mainstream classes. To investigate the language problems, data will be collected through writing samples of ten deaf adult learners and ten normal ESL learners of the same class, whereas observation in inclusive language teaching classrooms and interviews with five ESL teachers of inclusive classes will be conducted to identify the factors that are directly or indirectly involved in inclusive language education. In keeping with the aims of this study, a qualitative research paradigm will be applied to analyse the corpus. The study finds that, compared to normal ESL learners, deaf ESL learners face severe language issues, such as odd sentence structures, violations of subject-verb agreement, and misuse of verb forms and tenses. The study also predicts that in mainstream classrooms there are multiple factors affecting the smoothness of the teaching and learning procedure: the role of the mediator, the level of deaf learners, the empathy of normal learners towards deaf learners, and language teachers' training.

Keywords: deaf English language learner, empathy, mainstream classrooms, previous language knowledge of learners, role of mediator, language teachers' training

Procedia PDF Downloads 143
2989 Association Between Malnutrition and Dental Caries in Children

Authors: Mohammed Khalid Mahmood, Delphine Tardivo, Romain Lan

Abstract:

Dental caries is one of the most common diseases in the world, affecting billions of people and significantly lowering quality of life. Malnutrition, on the other hand, is an abnormal physiological condition defined as inadequate, imbalanced, or excessive consumption of macronutrients, micronutrients, or both. Oral health is impacted by malnutrition, and malnutrition can result from poor oral health. The objective of this paper was to study the association of serum vitamin D level and body mass index, as representatives of malnutrition at the micro and macro levels respectively, with dental caries. The results showed that: 1. the majority of the population studied (70%) are vitamin D deficient; 2. having a normal or even sufficient level of serum vitamin D and having a normal body mass index increase the chances of children being caries-free and having a lower caries index.

Keywords: children, dental Caries, malnutrition, vitamin D

Procedia PDF Downloads 54
2988 The Effect of Non-Normality on CB-SEM and PLS-SEM Path Estimates

Authors: Z. Jannoo, B. W. Yap, N. Auchoybur, M. A. Lazim

Abstract:

The two common approaches to Structural Equation Modeling (SEM) are Covariance-Based SEM (CB-SEM) and Partial Least Squares SEM (PLS-SEM). There is much debate on the performance of CB-SEM and PLS-SEM for small sample sizes and when distributions are non-normal. This study evaluates the performance of CB-SEM and PLS-SEM under normality and non-normality conditions via simulation. Monte Carlo simulation in the R programming language was employed to generate data based on a theoretical model with one endogenous and four exogenous variables, each latent variable having three indicators. For normal distributions, CB-SEM estimates were found to be inaccurate for small sample sizes, while PLS-SEM could still produce the path estimates. Meanwhile, for larger sample sizes, CB-SEM estimates show lower variability than PLS-SEM. Under non-normality, CB-SEM path estimates were inaccurate for small sample sizes; however, CB-SEM estimates are more accurate than those of PLS-SEM for sample sizes of 50 and above. The PLS-SEM estimates are not accurate unless the sample size is very large.

Keywords: CB-SEM, Monte Carlo simulation, normality conditions, non-normality, PLS-SEM

Procedia PDF Downloads 378
2987 Airborne Pollutants and Lung Surfactant: Biophysical Impacts of Surface Oxidation Reactions

Authors: Sahana Selladurai, Christine DeWolf

Abstract:

Lung surfactant comprises a lipid-protein film that coats the alveolar surface and serves to prevent alveolar collapse upon repeated breathing cycles. Exposure of lung surfactant to high concentrations of airborne pollutants, for example tropospheric ozone in smog, can chemically modify the lipid and protein components. These chemical changes can impact film functionality by decreasing the film's collapse pressure (the minimum surface tension attainable), altering its mechanical and flow properties, and modifying the lipid reservoir formation essential for re-spreading of the film during inhalation. In this study, we use Langmuir monolayers spread at the air-water interface as model membranes, where the compression and expansion of the film mimic the breathing cycle. The impact of ozone exposure on model lung surfactant films is measured using a Langmuir film balance, Brewster angle microscopy, and a pendant drop tensiometer as a function of film and sub-phase composition. The oxidized films are analyzed using mass spectrometry, where lipid and protein oxidation products are observed. Oxidation is shown to reduce surface activity, alter line tension (and film morphology), and in some cases visibly reduce the viscoelastic properties of the film when compared to controls. These reductions in film functionality are highly dependent on film and sub-phase composition; for example, the effect of oxidation is more pronounced when using a physiologically relevant buffer rather than water as the sub-phase. These findings can lead to a better understanding of the impact of continuous exposure to high levels of ozone on the mechanical process of breathing, as well as of the roles of certain lung surfactant components in this process.

Keywords: lung surfactant, oxidation, ozone, viscoelasticity

Procedia PDF Downloads 290
2986 Classification of ECG Signal Based on Mixture of Linear and Non-Linear Features

Authors: Mohammad Karimi Moridani, Mohammad Abdi Zadeh, Zahra Shahiazar Mazraeh

Abstract:

In recent years, the use of intelligent systems in biomedical engineering has increased dramatically, especially in the diagnosis of various diseases. Because the electrocardiogram (ECG) is relatively simple to record, this signal is a good tool for showing the function of the heart and the diseases associated with it. The aim of this paper is to design an intelligent system for automatically distinguishing a normal electrocardiogram signal from an abnormal one. Using this diagnostic system, it is possible to identify a person's heart condition in a very short time and with high accuracy. The data used in this article are from the PhysioNet database, made available in 2016 for use by researchers seeking the best method for detecting normal signals versus abnormal ones. The data are from both genders, the recording time varies from several seconds to several minutes, and all data are labeled normal or abnormal. Because of the limited accuracy and duration of the ECG signal and the similarity of the signal in some diseases to the normal signal, the heart rate variability (HRV) signal was used. Measuring and analyzing heart rate variability over time to evaluate the activity of the heart and to differentiate types of heart failure from one another is of interest to experts. In the preprocessing stage, after noise cancelation by an adaptive Kalman filter and extraction of the R wave by the Pan-Tompkins algorithm, R-R intervals were extracted and the HRV signal was generated. In the processing stage, a new idea is presented: in addition to using the statistical characteristics of the signal, a return map is created and nonlinear characteristics of the HRV signal are extracted, owing to the nonlinear nature of the signal. Finally, artificial neural networks, which are widely used in the field of ECG signal processing, together with the distinctive features were used to classify normal signals and abnormal ones. To evaluate the efficiency of the proposed classifiers, the area under the ROC curve (AUC) was used. The results of the simulation in the MATLAB environment showed that the AUC of the MLP neural network and of the SVM was 0.893 and 0.947, respectively. The results of the proposed algorithm also indicated that greater use of nonlinear characteristics in classifying normal and patient signals led to better performance. Today, research aims at quantitatively analyzing the linear and non-linear, or descriptive and random, nature of the heart rate variability signal, because it has been shown that the extent of these properties can be used to indicate the health status of an individual's heart. The study of the nonlinear behavior and dynamics of the heart's neural control system over the short and long term provides new information on how the cardiovascular system functions and has led to the development of research in this field. Given that the ECG signal contains important information and is one of the common tools used by physicians to diagnose heart disease, but that its accuracy is limited in time and some of its information is hidden from the viewpoint of physicians, the intelligent system proposed in this paper can help physicians diagnose healthy and diseased individuals with greater speed and accuracy and can be used as a complementary system in treatment centers.
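
As a rough sketch of the feature-extraction step described above (assuming the R-peak detection and the neural-network classifier are already in place), the example below derives a few linear statistics and nonlinear Poincaré-map features from a synthetic R-R interval series; the data are placeholders.

```python
# Hedged sketch: linear (time-domain) and nonlinear (Poincare/return-map) HRV
# features of the kind combined for classification. Synthetic R-R intervals;
# Pan-Tompkins detection and the MLP/SVM classifiers are out of scope here.
import numpy as np

rng = np.random.default_rng(3)
rr = 0.8 + 0.05 * rng.standard_normal(300)   # synthetic R-R intervals (s)

# Linear (statistical) features
mean_rr, sdnn = rr.mean(), rr.std(ddof=1)
rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))

# Nonlinear features from the return (Poincare) map of successive intervals
x, y = rr[:-1], rr[1:]
sd1 = np.sqrt(np.var(x - y, ddof=1) / 2)   # short-term variability
sd2 = np.sqrt(np.var(x + y, ddof=1) / 2)   # long-term variability

features = [mean_rr, sdnn, rmssd, sd1, sd2]
print([round(f, 4) for f in features])  # feature vector passed to the classifier
```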

Keywords: heart rate variability, signal processing, linear and non-linear features, classification methods, ROC curve

Procedia PDF Downloads 226
2985 Studies on Partial Replacement of Cement by Rice Husk Ash under Sodium Phosphate Medium

Authors: Dharmana Pradeep, Chandan Kumar Patnaikuni, N. V. S. Venugopal

Abstract:

Rice husk ash (RHA) is a green product that contains carbon and is also rich in silica. For the development of the durability and strength of any concrete, curing is very important. In this communication, we report on the partial replacement of cement with RHA at 0%, 5%, 7.5%, 10%, 12.5%, and 15% by weight under a sodium phosphate curing environment. The mix is designed for M40 grade concrete with proportions of 1:2.2:3.72. The test conducted on the concrete was compressive strength, and the specimens were cured in normal water and exposed to the chemical solution for 7, 28, and 56 days. For chemical curing, 0.5% and 1% sodium phosphate solutions were used, and the results were compared with the strength of normally cured concrete. The specimens exposed to 1% sodium phosphate showed that the compressive strength decreased with increasing RHA percentage.
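
To make the replacement scheme concrete, the short Python sketch below tabulates batch quantities per cubic metre for each RHA level from the stated mix proportion. It is illustrative only: the 400 kg/m³ binder content is a hypothetical value not given in the abstract, and the 1:2.2:3.72 ratio is assumed here to mean binder : fine aggregate : coarse aggregate.

```python
# Illustrative batch quantities per cubic metre for the RHA replacement levels.
# Assumptions: 400 kg/m^3 total binder (hypothetical), and 1 : 2.2 : 3.72 read as
# binder : fine aggregate : coarse aggregate; the abstract only gives the ratio and
# the replacement percentages.
BINDER = 400.0                      # kg/m^3, assumed total binder (cement + RHA)
FINE_RATIO, COARSE_RATIO = 2.2, 3.72

replacements = [0.0, 5.0, 7.5, 10.0, 12.5, 15.0]   # % of cement replaced by RHA

print(f"{'RHA %':>6} {'cement kg':>10} {'RHA kg':>8} {'fine kg':>9} {'coarse kg':>10}")
for pct in replacements:
    rha = BINDER * pct / 100.0
    cement = BINDER - rha
    fine = BINDER * FINE_RATIO      # aggregate masses kept constant across mixes
    coarse = BINDER * COARSE_RATIO
    print(f"{pct:6.1f} {cement:10.1f} {rha:8.1f} {fine:9.1f} {coarse:10.1f}")
```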

Keywords: rice husk ash, compressive strength, sodium phosphate, curing

Procedia PDF Downloads 311
2984 Finding the Longest Common Subsequence in Normal DNA and Disease Affected Human DNA Using Self Organizing Map

Authors: G. Tamilpavai, C. Vishnuppriya

Abstract:

Bioinformatics is an active research area that combines biological questions with computer science methods. The longest common subsequence (LCSS) is one of the major challenges in various bioinformatics applications. Computation of the LCSS plays a vital role in biomedicine and is an essential task in DNA sequence analysis in genetics, covering a wide range of disease-diagnosis steps. The objective of the proposed system is to find the longest common subsequence present in normal and disease-affected human DNA sequences using a Self Organizing Map (SOM) and LCSS. The human DNA sequences are collected from the National Center for Biotechnology Information (NCBI) database. Initially, each human DNA sequence is split into k-mers using a k-mer separation rule. Mean and median values are calculated for each k-mer. These values are fed as input to the Self Organizing Map for clustering. The resulting clusters are then given to the longest common subsequence (LCSS) algorithm, which finds the common subsequences present in every cluster. It returns n(n-1)/2 subsequences for each cluster, where n is the number of k-mers in that cluster. Experimental outcomes of the proposed system produce the possible longest common subsequences of normal and disease-affected DNA data. Thus, the proposed system will be a good initial aid for finding disease-causing sequences. Finally, a performance analysis is carried out for different DNA sequences. The obtained values show that the LCSS is retrieved in a shorter time than with the existing system.
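
The two building blocks named above can be sketched briefly in Python: a k-mer split of a toy sequence and a classic dynamic-programming longest common subsequence applied to every pair in a cluster, yielding the n(n-1)/2 comparisons mentioned in the abstract. The k-mer length, the toy sequence, and the non-overlapping split are illustrative assumptions, and the SOM clustering step of the paper is not reproduced here.

```python
# Minimal sketch of k-mer splitting and a dynamic-programming longest common
# subsequence (LCS); the SOM clustering described in the abstract is omitted.
from itertools import combinations

def kmers(seq, k):
    """Split a DNA sequence into consecutive, non-overlapping k-mers (assumed rule)."""
    return [seq[i:i + k] for i in range(0, len(seq) - k + 1, k)]

def lcs(a, b):
    """Classic O(len(a)*len(b)) dynamic-programming LCS, returning the subsequence itself."""
    dp = [[""] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            dp[i + 1][j + 1] = (dp[i][j] + ca if ca == cb
                                else max(dp[i][j + 1], dp[i + 1][j], key=len))
    return dp[-1][-1]

# Toy cluster of k-mers; n k-mers give n*(n-1)/2 pairwise comparisons.
cluster = kmers("ATGCGTACGTTAGCATGCGTAC", k=6)
for x, y in combinations(cluster, 2):
    print(x, y, "->", lcs(x, y))
```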

Keywords: clustering, k-mers, longest common subsequence, SOM

Procedia PDF Downloads 235
2983 Robust Inference with a Skew T Distribution

Authors: M. Qamarul Islam, Ergun Dogan, Mehmet Yazici

Abstract:

There is a growing body of evidence that non-normal data are more prevalent in nature than normal data. Examples can be quoted from, but are not restricted to, the areas of Economics, Finance, and Actuarial Science. The non-normality considered here is expressed in terms of the fat-tailedness and asymmetry of the relevant distribution. In this study, a skew t distribution that can be used to model data exhibiting inherently non-normal behavior is considered. This distribution has tails fatter than a normal distribution, and it also exhibits skewness. Although maximum likelihood estimates can be obtained by iteratively solving the likelihood equations, which are non-linear in form, this can be problematic in terms of convergence and in many other respects. Therefore, the method of modified maximum likelihood is preferred, in which the estimates are derived by expressing the intractable non-linear likelihood equations in terms of standardized ordered variates and replacing the intractable terms by their linear approximations obtained from the first two terms of a Taylor series expansion about the quantiles of the distribution. These estimates, called modified maximum likelihood estimates, are obtained in closed form. Hence, they are easy to compute and to manipulate analytically. In fact, the modified maximum likelihood estimates are asymptotically equivalent to maximum likelihood estimates. Even in small samples, the modified maximum likelihood estimates are found to be approximately the same as the maximum likelihood estimates obtained iteratively. It is shown in this study that the modified maximum likelihood estimates are not only unbiased but substantially more efficient than the commonly used moment estimates or least square estimates, which are known to be biased and inefficient in such cases. Furthermore, in conventional regression analysis it is assumed that the error terms are normally distributed, and hence the well-known least square method is considered a suitable and preferred method for making the relevant statistical inferences. However, a number of empirical studies have shown that non-normal errors are more prevalent, and even transforming and/or filtering techniques may not produce normally distributed residuals. Here, a study is carried out for multiple linear regression models with random errors that follow a non-normal pattern. Through an extensive simulation it is shown that the modified maximum likelihood estimates of the regression parameters are plausibly robust to the distributional assumptions and to various data anomalies, as compared to the widely used least square estimates. Relevant tests of hypothesis are developed and explored for desirable properties in terms of their size and power. The tests based upon modified maximum likelihood estimates are found to be substantially more powerful than the tests based upon least square estimates. Several examples are provided from the areas of Economics and Finance where such distributions are interpretable in terms of the efficient market hypothesis with respect to asset pricing, portfolio selection, risk measurement, capital allocation, etc.
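
The efficiency point at the heart of the abstract can be illustrated with a short Monte Carlo sketch in Python: under fat-tailed errors, a likelihood-based location estimate has markedly smaller sampling variance than the moment (least-squares) estimate, i.e. the sample mean. This is a sketch under stated assumptions, not the paper's method: it uses a symmetric Student's t with 3 degrees of freedom rather than the skew t, plain numerical MLE rather than the closed-form modified maximum likelihood estimates, and arbitrary choices of sample size and number of replications.

```python
# Monte Carlo comparison of the sample mean (moment / least-squares estimate) with a
# numerical MLE of location under fat-tailed t(3) errors; illustrative settings only.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
df, n, reps = 3, 50, 2000          # illustrative choices, not the paper's settings
true_mu = 0.0

mean_est, mle_est = [], []
for _ in range(reps):
    x = true_mu + stats.t.rvs(df, size=n, random_state=rng)
    mean_est.append(x.mean())                              # moment / least-squares estimate
    nll = lambda mu: -stats.t.logpdf(x - mu, df).sum()     # negative log-likelihood in mu
    mle_est.append(optimize.minimize_scalar(nll).x)        # numerical MLE of location

print("variance of sample mean:", np.var(mean_est))
print("variance of MLE        :", np.var(mle_est))
print("relative efficiency (mean/MLE):", np.var(mean_est) / np.var(mle_est))
```

With three degrees of freedom the ratio printed in the last line comes out well above one, which is the same qualitative gap the abstract reports between least square/moment estimates and the (modified) maximum likelihood estimates.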

Keywords: least square estimates, linear regression, maximum likelihood estimates, modified maximum likelihood method, non-normality, robustness

Procedia PDF Downloads 383