Search results for: European Standard Classification
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8428

6838 Geospatial Techniques and VHR Imagery Use for Identification and Classification of Slums in Gujrat City, Pakistan

Authors: Muhammad Ameer Nawaz Akram

Abstract:

The 21st century has revealed that more people around the world now live in urban settlements than in rural zones. The growth of numerous cities in emerging and newly developed countries is accompanied by the rise of slums. The precise definition of a slum varies from country to country, but the general consensus is that slums are dilapidated settlements facing severe poverty and lacking access to sanitation, water, electricity, adequate living conditions, and land tenure. Slum settlements vary in unique patterns within and among countries and cities. The core objective of this study is the spatial identification and classification of slums in Gujrat city, Pakistan, from very high-resolution GeoEye-1 (0.41 m) satellite imagery. Slums were first identified using GPS for sample-site identification and ground-truthing; through this process, 425 slums were identified. Object-Oriented Analysis (OOA) was then applied to classify slums on the digital imagery. Spatial analysis software packages, e.g., ArcGIS 10.3, Erdas Imagine 9.3, and Envi 5.1, were used for processing the data and performing the analysis. Results show that OOA provides up to 90% accuracy for the identification of slums. The Jalal Cheema and Allah Ho colonies are severely affected by slum settlements, and the rate of criminal activity there is also higher than in other areas. Slums are increasing over time in urban areas and will become a hazardous problem in the near future. Executive bodies therefore need to make effective policies and move toward the amelioration of the city.
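As an editorial illustration of the accuracy figure quoted above, the sketch below computes overall, producer's, and user's accuracy for a binary slum / non-slum classification. The ten labels are invented stand-ins; in the study they would come from the 425 GPS-verified sites and the OOA output.

```python
import numpy as np

# Invented ground-truth and OOA-predicted labels (1 = slum, 0 = non-slum)
reference = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 1])  # ground truth
predicted = np.array([1, 1, 0, 0, 0, 0, 1, 1, 1, 1])  # classifier output

# Confusion-matrix counts for the binary case
tp = int(np.sum((predicted == 1) & (reference == 1)))
tn = int(np.sum((predicted == 0) & (reference == 0)))
fp = int(np.sum((predicted == 1) & (reference == 0)))
fn = int(np.sum((predicted == 0) & (reference == 1)))

overall_accuracy = (tp + tn) / reference.size
producer_accuracy = tp / (tp + fn)   # 1 - omission error
user_accuracy = tp / (tp + fp)       # 1 - commission error
print(overall_accuracy, producer_accuracy, user_accuracy)
```

The same bookkeeping extends to a full confusion matrix when more land-cover classes are mapped.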

Keywords: slums, GPS, satellite imagery, object oriented analysis, zonal change detection

Procedia PDF Downloads 130
6837 Composition, Velocity, and Mass of Projectiles Generated from a Chain Shot Event

Authors: Eric Shannon, Mark J. McGuire, John P. Parmigiani

Abstract:

A hazard associated with the use of timber harvesters is chain shot. Harvester saw chain is subjected to large dynamic mechanical stresses which can cause it to fracture. The resulting open loop of saw chain can fracture a second time and create a projectile consisting of several saw-chain links, referred to as a chain shot. Its high kinetic energy enables it to penetrate operator enclosures, making it a significant hazard. Accurate data on projectile composition, mass, and speed are needed for the design both of operator enclosures resistant to projectile penetration and of saw chains resistant to fracture. The work presented here contributes to providing this data through the use of a test machine designed and built at Oregon State University. The machine's enclosure is a standard shipping container. To safely contain any anticipated chain shot, the container was lined with both 9.5 mm AR500 steel plates and 50 mm high-density polyethylene (HDPE). During normal operation, projectiles are captured virtually undamaged in the HDPE, enabling subsequent analysis. Standard harvester components are used for bar mounting and chain tensioning. Standard guide bars and saw chains are used. An electric motor with a flywheel drives the system. Testing procedures follow ISO Standard 11837. Chain speed at break was approximately 45.5 m/s. Data were collected using both a 75 cm solid bar (Oregon 752HSFB149) and a 90 cm solid bar (Oregon 902HSFB149). Saw chains used were 89 Drive Link .404”-18HX loops made from factory spools. Standard 16-tooth sprockets were used. Projectile speed was measured using both a high-speed camera and a chronograph. Both rotational and translational kinetic energy are calculated. For this study, 50 chain shot events were executed. Results showed that projectiles consisted of a variety of combinations of drive links, tie straps, and cutter links. Most common (occurring in 60% of the events) was a drive-link / tie-strap / drive-link combination having a mass of approximately 10.33 g. Projectile mass varied from a minimum of 2.99 g, corresponding to a drive link only, to a maximum of 18.91 g, corresponding to a drive-link / tie-strap / drive-link / cutter-link / drive-link combination. Projectile translational speed was measured to be approximately 270 m/s and rotational speed approximately 14000 r/s. The calculated translational and rotational kinetic energy magnitudes each average over 600 J. This study provides useful information for both timber harvester manufacturers and saw chain manufacturers to design products that reduce the hazards associated with timber harvesting.
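The kinetic-energy arithmetic for a representative projectile can be sketched from the values reported above. The rotational term needs a moment of inertia the abstract does not give, so a 40 mm chain segment modelled as a slender rod is assumed here, and the reported "14000 r/s" is taken as rad/s (the unit is ambiguous); both are labelled assumptions, not study data.

```python
# Representative values from the study: a ~10.33 g drive-link/tie-strap/
# drive-link combination at ~270 m/s translational speed.
mass_kg = 10.33e-3
v = 270.0                                 # translational speed, m/s
omega = 14000.0                           # rotational speed, rad/s (assumed)

ke_trans = 0.5 * mass_kg * v ** 2         # ~376.5 J for this projectile

# Assumed geometry: 40 mm segment spinning about its centre,
# approximated as a slender rod (I = m L^2 / 12).
length_m = 0.040
inertia = mass_kg * length_m ** 2 / 12.0
ke_rot = 0.5 * inertia * omega ** 2

print(round(ke_trans, 1), round(ke_rot, 1))
```

Heavier link combinations (up to the reported 18.91 g) raise both terms proportionally, which is consistent with the reported per-event averages exceeding 600 J.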

Keywords: chain shot, timber harvesters, safety, testing

Procedia PDF Downloads 140
6836 Computer Aided Diagnosis Bringing Changes in Breast Cancer Detection

Authors: Devadrita Dey Sarkar

Abstract:

Regardless of the many technologic advances in the past decade, increased training and experience, and the obvious benefits of uniform standards, the false-negative rate in screening mammography remains unacceptably high. A computer-aided neural network classification of regions of suspicion (ROS) on digitized mammograms is presented in this abstract, which employs features extracted by a new technique based on independent component analysis. CAD is a concept established by taking into account equally the roles of physicians and computers, whereas automated computer diagnosis is a concept based on computer algorithms only. With CAD, the performance of computers does not have to be comparable to or better than that of physicians, but needs to be complementary to it. In fact, a large number of CAD systems have been employed for assisting physicians in the early detection of breast cancers on mammograms. A CAD scheme that makes use of lateral breast images has the potential to improve the overall performance in the detection of breast lumps. Because breast lumps can be detected reliably by computer on lateral breast mammograms, radiologists' accuracy in the detection of breast lumps would be improved by the use of CAD, and thus early diagnosis of breast cancer would become possible. In the future, many CAD schemes could be assembled as packages and implemented as a part of PACS. For example, the package for breast CAD may include the computerized detection of breast nodules as well as the computerized classification of benign and malignant nodules. In order to assist in the differential diagnosis, it would be possible to search for and retrieve images (or lesions) with these CAD systems, which would be a reliable and useful method for quantifying the similarity of a pair of images for visual comparison by radiologists.

Keywords: CAD (computer-aided diagnosis), lesions, neural network, ROS (region of suspicion)

Procedia PDF Downloads 453
6835 The Effectiveness of Warm-Water Footbath on Fatigue in Cancer Patients Undergoing Chemotherapy

Authors: Yu-Wen Lin, Li-Ni Liu

Abstract:

Introduction: Fatigue is the most common symptom experienced by cancer patients undergoing chemotherapy. Patients receiving anticancer therapies develop fatigue in a higher proportion than patients who do not receive anticancer therapies. Fatigue has significant impacts on quality of life, daily activities, mood status, and social behavior. A warm-water footbath (WWF) at 41℃ promotes circulation and removes metabolites, thereby improving sleep and relieving fatigue. The aim of this study is to determine the effectiveness of WWF in relieving fatigue in cancer patients undergoing chemotherapy. Materials and Methods: This is a single-center, prospective, quasi-experimental study conducted in an oncology ward in Taiwan. Participants were assigned by purposive sampling to the WWF group (experimental) or the standard care group (control). In the WWF group, participants were asked to soak their feet in 42-43℃ water for 15 minutes daily for 6 consecutive days, beginning one day before chemotherapy. Each participant's fatigue level was evaluated using the Taiwanese version of the Brief Fatigue Inventory (BFI-T), completed on each of the 8 consecutive days of the study. The primary outcome was the comparison of the BFI-T scores of the WWF group with those of the standard care group. Results: Sixty participants were enrolled, 30 in each group, and the groups had comparable characteristics. The BFI-T scores of both groups increased with the days of chemotherapy and were highest on day 4. Scores in both groups decreased from day 5 onward and were significantly lower in the WWF group on day 5 compared with the standard care group (4.17 vs. 5.7, P < .05). At the end of the study, fatigue at its worst was significantly lower in the WWF group (2.33 vs. 4.37, P < .001). No adverse events were reported.
Conclusion: WWF is an easy, safe, non-invasive, and relatively inexpensive nursing intervention for improving fatigue in cancer patients undergoing chemotherapy. In summary, this study shows that WWF is a simple complementary care method that is effective for improving and relieving fatigue in a short time. Improving fatigue in turn enhances quality of life, which is important for cancer patients undergoing chemotherapy. Larger prospective randomized controlled trials examining the long-term effectiveness and outcomes of WWF should be performed to confirm these findings.
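The group comparison reported above can be illustrated with a two-sample Welch t statistic. The day-5 scores below are fabricated for the sketch; the abstract reports only group means (4.17 vs. 5.7), not raw per-patient data.

```python
from statistics import mean, stdev

# Hypothetical day-5 BFI-T scores (0-10 scale), invented for illustration
wwf = [3.5, 4.0, 4.5, 4.2, 3.9, 4.6, 4.3, 4.1]   # footbath group
care = [5.4, 5.9, 5.6, 6.1, 5.5, 5.8, 5.7, 5.6]  # standard care group

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / (va / len(a) + vb / len(b)) ** 0.5

t = welch_t(wwf, care)
print(round(t, 2))   # negative t: the WWF group reports less fatigue
```

In practice the p-value would come from the t distribution with Welch-Satterthwaite degrees of freedom (e.g. `scipy.stats.ttest_ind(..., equal_var=False)`).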

Keywords: chemotherapy, warm-water footbath, fatigue, Taiwanese version of the brief fatigue inventory

Procedia PDF Downloads 138
6834 The Determination of Operating Reserve in Small Power Systems Based on Reliability Criteria

Authors: H. Falsafi Falsafizadeh, R. Zeinali Zeinali

Abstract:

This paper focuses on the determination of the total Operating Reserve (OR) level, consisting of spinning and non-spinning reserves, in two small real power systems, in such a way that the system reliability indicator complies with typical industry standards. For this purpose, the standard used by the North American Electric Reliability Corporation (NERC), i.e., 1 day of outage in 10 years, or 0.1 days/year, is relied upon. The simulation of system operation used to determine the total operating reserve level was performed with PLEXOS, an industry-standard production simulation package in this field. In this paper, the operating reserve that meets an annual Loss of Load Expectation (LOLE) of approximately 0.1 days per year is determined for the study year. This reserve is the minimum amount of reserve required in a power system and is generally defined as a percentage of the annual peak.
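The LOLE criterion referred to above can be sketched with a capacity-outage enumeration. The three generating units and the daily peaks below are invented; the actual study used PLEXOS on real systems.

```python
from itertools import product

# Hypothetical system: (capacity MW, forced-outage rate) per unit
units = [(400.0, 0.05), (300.0, 0.04), (300.0, 0.06)]

def daily_lolp(peak_mw):
    """P(available capacity < daily peak), enumerating unit up/down states."""
    p = 0.0
    for states in product((True, False), repeat=len(units)):
        prob, cap = 1.0, 0.0
        for (c, forced_outage), up in zip(units, states):
            prob *= (1.0 - forced_outage) if up else forced_outage
            cap += c if up else 0.0
        if cap < peak_mw:
            p += prob
    return p

# LOLE (days/year) = sum of daily loss-of-load probabilities. A planner
# would raise capacity or reserve until LOLE falls to the 0.1 days/year
# NERC criterion; this toy system is far from compliant.
peaks = [600.0] * 305 + [950.0] * 60   # invented daily peaks, MW
lole = sum(daily_lolp(p) for p in peaks)
print(round(lole, 3))
```

Real studies replace the enumeration with a capacity-outage probability table and chronological simulation, but the accounting is the same.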

Keywords: frequency control, LOLE, operating reserve, system reliability

Procedia PDF Downloads 338
6833 The Impact of FDI on Economic Growth in Algeria

Authors: Mohammed Yagoub

Abstract:

Algeria's reorientation toward a market economy, sponsored by the government in the early 1990s, its desire to develop investment mechanisms and promote development, its recent partnership agreement with the European Union, and its forthcoming accession to the World Trade Organization make foreign direct investment (FDI) one of the most important means of opening up to foreign markets, acquiring technology, and engaging with globalization. In this article, we discuss the impact of FDI on economic growth in Algeria.

Keywords: economic, development, markets, FDI, displacement, globalization

Procedia PDF Downloads 355
6832 Deep Learning-Based Classification of 3D CT Scans with Real Clinical Data: Impact of Image Format

Authors: Maryam Fallahpoor, Biswajeet Pradhan

Abstract:

Background: Artificial intelligence (AI) serves as a valuable tool in mitigating the scarcity of human resources required for the evaluation and categorization of vast quantities of medical imaging data. When AI operates with optimal precision, it minimizes the demand for human interpretations and, thereby, reduces the burden on radiologists. Among various AI approaches, deep learning (DL) stands out as it obviates the need for feature extraction, a process that can impede classification, especially with intricate datasets. The advent of DL models has ushered in a new era in medical imaging, particularly in the context of COVID-19 detection. Traditional 2D imaging techniques exhibit limitations when applied to volumetric data, such as Computed Tomography (CT) scans. Medical images predominantly exist in one of two formats: neuroimaging informatics technology initiative (NIfTI) and digital imaging and communications in medicine (DICOM). Purpose: This study aims to employ DL for the classification of COVID-19-infected pulmonary patients and normal cases based on 3D CT scans while investigating the impact of image format. Material and Methods: The dataset used for model training and testing consisted of 1245 patients from IranMehr Hospital. All scans shared a matrix size of 512 × 512, although they exhibited varying slice numbers. Consequently, after loading the DICOM CT scans, image resampling and interpolation were performed to standardize the slice count. All images underwent cropping and resampling, resulting in uniform dimensions of 128 × 128 × 60. Resolution uniformity was achieved through resampling to 1 mm × 1 mm × 1 mm, and image intensities were confined to the range of (−1000, 400) Hounsfield units (HU). For classification purposes, positive pulmonary COVID-19 involvement was designated as 1, while normal images were assigned a value of 0. Subsequently, a U-net-based lung segmentation module was applied to obtain 3D segmented lung regions. 
The pre-processing stage included normalization, zero-centering, and shuffling. Four distinct 3D CNN models (ResNet152, ResNet50, DenseNet169, and DenseNet201) were employed in this study. Results: The findings revealed that the segmentation technique yielded superior results for DICOM images, which could be attributed to the potential loss of information during the conversion of original DICOM images to NIfTI format. Notably, ResNet152 and ResNet50 exhibited the highest accuracy at 90.0%, and the same models achieved the best F1 score at 87%. ResNet152 also secured the highest Area Under the Curve (AUC) at 0.932. Regarding sensitivity and specificity, DenseNet201 achieved the highest values at 93% and 96%, respectively. Conclusion: This study underscores the capacity of deep learning to classify COVID-19 pulmonary involvement using real 3D hospital data. The results underscore the significance of employing DICOM-format 3D CT images alongside appropriate pre-processing techniques when training DL models for COVID-19 detection. This approach enhances the accuracy and reliability of diagnostic systems for COVID-19 detection.
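The intensity pre-processing chain described above can be sketched as follows. The voxel values here are random stand-ins, not real CT data; resampling to 1 mm × 1 mm × 1 mm and the 128 × 128 × 60 crop would normally precede this step (e.g. with `scipy.ndimage.zoom`).

```python
import numpy as np

# Dummy volume standing in for a resampled 60-slice CT scan
rng = np.random.default_rng(0)
volume = rng.uniform(-2000.0, 1500.0, size=(60, 128, 128))

# Clip to the study's (-1000, 400) HU window, min-max normalize to
# [0, 1], then zero-center before feeding the 3D CNN.
hu_min, hu_max = -1000.0, 400.0
clipped = np.clip(volume, hu_min, hu_max)
normalized = (clipped - hu_min) / (hu_max - hu_min)
zero_centered = normalized - normalized.mean()

print(clipped.min(), clipped.max(), float(zero_centered.mean()))
```

Clipping before normalization keeps air and dense bone from dominating the dynamic range of the lung window.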

Keywords: deep learning, COVID-19 detection, NIfTI format, DICOM format

Procedia PDF Downloads 77
6831 Correlation between Fetal Umbilical Cord pH and the Day, the Time, and the Team Handover Times: An Analysis of 6929 Deliveries at Ulm University Hospital

Authors: Sabine Pau, Sophia Volz, Emanuel Bauer, Amelie De Gregorio, Frank Reister, Wolfgang Janni, Florian Ebner

Abstract:

Purpose: The umbilical cord pH is a well-evaluated predictor of neonatal outcome. This study correlates neonatal umbilical cord pH with the weekday of delivery, the time of birth, and the staff handover times (midwives and doctors). Material and Methods: This retrospective study included all deliveries over a 20-year period (1994-2014) at our primary obstetric center. All deliveries with a newborn cord pH under 7.20 were included in this analysis (6929 of 48974 deliveries; 14.4%). Subgroups were formed according to pH (< 7.05; 7.05-7.09; 7.10-7.14; 7.15-7.19). The data were then separated into day- and night-time (8 am-8 pm / 8 pm-8 am) for a first analysis. Finally, handover times were defined as 6-6.30 am, 2-2.30 pm, and 10-10.30 pm for midwives, and for the doctors as 8-8.30 am and 4-4.30 pm (Monday-Thursday), 2-2.30 pm (Friday), and 9-9.30 am (weekend). A shift routinely consists of at least three doctors as well as three midwives. Results: During the last 20 years, 6929 neonates were born with an umbilical cord pH < 7.20 (< 7.05: 7.1%; 7.05-7.09: 10.9%; 7.10-7.14: 30.2%; 7.15-7.19: 51.8%). There was no significant difference for night vs. day delivery (p = 0.408), delivery on different weekdays (p = 0.253), delivery between Monday to Thursday, Friday, and the weekend (p = 0.496), or delivery during the handover times of the doctors or the midwives (p = 0.221). Even the standard deviations showed no differences between the groups. Conclusion: Despite an increased workload over the last 20 years, the standard of care remains high, even during handover times and night shifts. This applies to both midwives and doctors. As neonatal outcome depends on various factors, further studies are necessary to take more factors influencing fetal outcome into consideration. In order to maintain this high standard of care, adaptation to workload and changing conditions is necessary.
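The pH stratification used above is a simple binning step. The sketch below groups hypothetical cord-pH values into the study's four strata below pH 7.20; the eight values are invented.

```python
import numpy as np

# Invented cord-pH values for deliveries already filtered to pH < 7.20
ph = np.array([7.01, 7.07, 7.12, 7.18, 7.16, 7.04, 7.11, 7.19])

edges = [7.05, 7.10, 7.15, 7.20]                   # right-open bin edges
labels = ["<7.05", "7.05-7.09", "7.10-7.14", "7.15-7.19"]
idx = np.digitize(ph, edges)                       # 0..3 for any pH < 7.20
counts = {lab: int(np.sum(idx == i)) for i, lab in enumerate(labels)}
print(counts)
```

In the full analysis, cross-tabulating these strata against delivery time categories would feed the chi-square-style group comparisons the abstract reports.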

Keywords: delivery, fetal umbilical cord pH, day time, handover times

Procedia PDF Downloads 309
6830 Human Vibrotactile Discrimination Thresholds for Simultaneous and Sequential Stimuli

Authors: Joanna Maj

Abstract:

Body machine interfaces (BMIs) afford users a non-invasive way to coordinate movement. Vibrotactile stimulation has been incorporated into BMIs to provide real-time feedback and guide movement control, benefiting patients with cognitive deficits, such as stroke survivors. To advance research in this area, we examined vibration discrimination thresholds at four body locations to determine suitable application sites for future multi-channel BMIs that use vibration cues to guide movement planning and control. Twelve healthy adults had a pair of small vibrators (tactors) affixed to the skin at each location: forearm, shoulders, torso, and knee. A "standard" stimulus (186 Hz; 750 ms) and "probe" stimuli (11 levels ranging from 100 Hz to 235 Hz; 750 ms) were delivered. Probe and standard stimulus pairs could occur sequentially or simultaneously (timing). Participants verbally indicated which stimulus felt more intense. Stimulus order was counterbalanced across tactors and body locations. The probabilities that probe stimuli felt more intense than the standard stimulus were computed and fit with a cumulative Gaussian function; the discrimination threshold was defined as one standard deviation of the underlying distribution. Threshold magnitudes depended on stimulus timing and location. Discrimination thresholds were better for stimuli applied sequentially vs. simultaneously at the torso as well as the knee. Thresholds were small (better) and relatively insensitive to timing differences for vibrations applied at the shoulder. BMI applications requiring multiple channels of simultaneous vibrotactile stimulation should therefore consider the shoulder as a deployment site for a vibrotactile BMI interface.
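The threshold estimate described above (cumulative Gaussian fit; threshold = one SD of the fitted distribution) can be sketched as follows. The response probabilities are fabricated for illustration; the real per-participant data are not in the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Invented probabilities that each probe (100-235 Hz, 11 levels) felt
# more intense than the 186 Hz standard stimulus
probe_hz = np.linspace(100.0, 235.0, 11)
p_stronger = np.array([0.02, 0.05, 0.10, 0.18, 0.30, 0.42,
                       0.52, 0.68, 0.80, 0.91, 0.97])

def psychometric(x, mu, sigma):
    """Cumulative Gaussian psychometric function."""
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, probe_hz, p_stronger,
                           p0=[180.0, 30.0])
# sigma, one SD of the underlying distribution, is the discrimination
# threshold; mu is the point of subjective equality.
print(round(mu, 1), round(sigma, 1))
```

A smaller fitted sigma at a body site means the vibration frequencies there are easier to tell apart, which is the comparison made across the four sites.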

Keywords: electromyography, electromyogram, neuromuscular disorders, biomedical instrumentation, controls engineering

Procedia PDF Downloads 62
6829 Optimal Operation of Bakhtiari and Roudbar Dam Using Differential Evolution Algorithms

Authors: Ramin Mansouri

Abstract:

Because the discharge regime of rivers rarely matches water demands, one of the best ways to use water resources is to construct dams that regulate the natural flow of rivers and supply water needs. In the optimal utilization of reservoirs, considering multiple important goals at the same time is of very high importance. To study this, statistical data for the Bakhtiari and Roudbar dams over 46 years (1955 until 2001) are used. Initially, an appropriate objective function was specified, and the rule curve was developed using the Differential Evolution (DE) algorithm. Next, the operation policy based on rule curves was compared with the standard operation policy. The proposed method distributed the deficit across the whole year, and the lowest damage was inflicted on the system. The standard deviation of the monthly shortfall of each year under the proposed algorithm deviated less than under the other two methods. The results show that median values for the coefficients F and Cr provide the optimum situation and prevent the DE algorithm from being trapped in a local optimum. The most optimal values are 0.6 and 0.5 for the F and Cr coefficients, respectively. After finding the best combination of F and Cr values, the algorithm was examined for independent populations. For this purpose, populations of 4, 25, 50, 100, 500, and 1000 members were studied over two generation counts (G = 50 and 100). The results indicate that generation number 200 is suitable for optimization. Runtime increases almost linearly with the population size, which indicates the effect of population on the runtime of the algorithm; hence, specifying a suitable population size to obtain optimal results is very important. The standard operation policy had a better reversibility percentage but inflicted severe vulnerability on the system. The results obtained in years of low rainfall were very good compared with the other comparative methods.
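A minimal sketch of the optimization setup, using the coefficient values the study reports as best (F = 0.6, Cr = 0.5): a hypothetical six-period release problem stands in for the paper's rule-curve objective, with all inflow and demand numbers invented.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Invented monthly inflows and demands (hm^3) and storage limits
inflow = np.array([120.0, 80.0, 60.0, 40.0, 90.0, 150.0])
demand = np.array([100.0, 100.0, 90.0, 80.0, 70.0, 60.0])
s0, s_max = 200.0, 400.0

def deficit_cost(releases):
    """Squared supply deficits plus a heavy penalty for emptying storage."""
    storage, cost = s0, 0.0
    for q_in, dem, rel in zip(inflow, demand, releases):
        storage = min(storage + q_in - rel, s_max)   # spill above s_max
        if storage < 0.0:                            # infeasible release
            cost += 1e6 * (-storage)
            storage = 0.0
        if rel < dem:
            cost += (dem - rel) ** 2
    return cost

bounds = [(0.0, 150.0)] * len(demand)
result = differential_evolution(deficit_cost, bounds,
                                mutation=0.6,        # F coefficient
                                recombination=0.5,   # Cr coefficient
                                seed=1, maxiter=300)
print(round(result.fun, 3))
```

Here the demand is fully suppliable, so the optimizer should drive the deficit cost to essentially zero; the study's real objective and 46-year inflow series are of course far richer.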

Keywords: reservoirs, differential evolution, dam, optimal operation

Procedia PDF Downloads 71
6828 Generalized Hyperbolic Functions: Exponential-Type Quantum Interactions

Authors: Jose Juan Peña, J. Morales, J. García-Ravelo

Abstract:

In the search for potential models applied in the theoretical treatment of diatomic molecules, some models have been constructed using standard hyperbolic functions as well as the so-called q-deformed hyperbolic functions (sc q-dhf), which displace and modify the shape of the potential under study. In order to transcend the scope of hyperbolic functions, this work presents a kind of generalized q-deformed hyperbolic functions (g q-dhf). By a suitable transformation through the q deformation parameter, it is shown that these g q-dhf can be expressed in terms of their corresponding standard ones, and that they can also be reduced to the sc q-dhf. As a useful application of the proposed approach, and considering a class of exactly solvable multi-parameter exponential-type potentials, some new q-deformed quantum interaction models are presented that can serve as interesting alternatives in quantum physics. Furthermore, because quantum potential models are conditioned on the q-dependence of the parameters that characterize the exponential-type potentials, it is shown that many specific q-deformed potentials are obtained as particular cases of the proposal.
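For reference, the standard q-deformed hyperbolic functions mentioned above are commonly defined (in the Arai convention, which this abstract appears to follow; an assumption on our part) as

```latex
\sinh_q x = \frac{e^{x} - q\,e^{-x}}{2}, \qquad
\cosh_q x = \frac{e^{x} + q\,e^{-x}}{2}, \qquad
\tanh_q x = \frac{\sinh_q x}{\cosh_q x},
```

which satisfy $\cosh_q^2 x - \sinh_q^2 x = q$ and, for $q > 0$, reduce to shifted standard functions via $\sinh_q x = \sqrt{q}\,\sinh\!\left(x - \tfrac{1}{2}\ln q\right)$; this shift identity is the simplest instance of the abstract's claim that deformed functions can be expressed in terms of their standard counterparts.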

Keywords: diatomic molecules, exponential-type potentials, hyperbolic functions, q-deformed potentials

Procedia PDF Downloads 176
6827 Detection of Flood-Prone Areas Using Multi-Criteria Evaluation, Geographical Information Systems and Fuzzy Logic: The Ardas Basin Case

Authors: Vasileiou Apostolos, Theodosiou Chrysa, Tsitroulis Ioannis, Maris Fotios

Abstract:

The severity of extreme phenomena lies in their ability to cause severe damage in a short amount of time. It has been observed that floods affect the greatest number of people and cause the greatest damage of all annual natural disasters. The detection of potential flood-prone areas constitutes one of the fundamental components of the European Natural Disaster Management Policy, directly connected to European Directive 2007/60. The aim of the present paper is to develop a new methodology that combines geographical information, fuzzy logic, and multi-criteria evaluation methods to define the most vulnerable areas. To this end, ten factors related to the geophysical, morphological, climatological/meteorological, and hydrological characteristics of the basin were selected. Two models were then created to detect the areas most prone to flooding. The first model defined the weight of each factor using the Analytical Hierarchy Process (AHP), and the final map of possible flood spots was created using GIS and Boolean algebra. The second model used a combination of fuzzy logic and GIS, and a corresponding map was created. The application area of these methodologies was the Ardas basin, due to the frequent and significant floods that have taken place there in recent years. The results were then compared with previously observed floods. The analysis shows that both models can detect possible flood spots with great precision. As the fuzzy logic model is less time-consuming, it is considered the better model to apply to other areas. These results can contribute to the delineation of high-risk areas and to the creation of successful flood management plans.
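The AHP weighting step referred to above can be sketched as follows: factor weights come from the principal eigenvector of a pairwise-comparison matrix, with a consistency check. The 3 × 3 matrix below is invented (the paper compared ten factors) and uses Saaty's 1-9 scale.

```python
import numpy as np

# Invented pairwise comparisons for three hypothetical factors
# (say slope vs. land use vs. distance to river)
A = np.array([[1.0,   3.0,   5.0],
              [1/3.0, 1.0,   3.0],
              [1/5.0, 1/3.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))          # principal eigenvalue index
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                              # normalized factor weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)      # consistency index
cr = ci / 0.58                            # Saaty random index RI = 0.58 for n = 3
print(np.round(w, 3), round(float(cr), 3))
```

A consistency ratio below 0.1 is the usual acceptance criterion; the weighted factors are then overlaid in GIS to produce the flood-susceptibility map.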

Keywords: analytical hierarchy process, flood prone areas, fuzzy logic, geographic information system

Procedia PDF Downloads 370
6826 Impacts and Management of Oil Spill Pollution along the Chabahar Bay by ESI Mapping, Iran

Authors: M. Sanjarani, A. Danehkar, A. Mashincheyan, A. H. Javid, S. M. R. Fatemi

Abstract:

Oil spills in marine waters have a direct impact on coastal resources and communities. An Environmental Sensitivity Index (ESI) map is the first step in assessing the potential impact of an oil spill and minimizing damage to coastal resources. In order to create Environmental Sensitivity Maps for the Chabahar bay (Iran), information has been collected in three different layers (shoreline classification, biological resources, and human-use resources) by means of field observations and measurements of beach morphology, personal interviews with professionals in different areas, and the collection of bibliographic information. In this paper, an attempt is made to prepare an ESI map of the Chabahar bay coast for sensitivity to oil spills. The Chabahar bay is highly threatened by oil spills because of its port, dense mangrove forest, the only coral spot in the Oman Sea, and many industrial activities. Mapping of the coastal resources, shoreline, and coastal structures was carried out using satellite images and GIS technology. The coastal features were classified into three major categories: shoreline classification, biological resources, and human-use resources. The important resources were classified into mangrove, exposed tidal flats, sandy beach, etc. The sensitivity of the shore was ranked from low to high (1 = low sensitivity, 10 = high sensitivity) based on the geomorphology of the Chabahar bay coast using NOAA standards (sensitivity to oil, ease of clean-up, etc.). Eight ESI types were found in the area, namely ESI 1A, 1C, 3A, 6B, 7, 8B, 9A, and 10D. In the study area, 50% was defined as high sensitivity, less than 1% as medium, and 49% as low sensitivity. ESI maps are useful to oil spill responders, coastal managers, and contingency planners. The overall ESI mapping product can provide a valuable management tool not only for oil spill response but also for better integrated coastal zone management.

Keywords: ESI, oil spill, GIS, Chabahar Bay, Iran

Procedia PDF Downloads 355
6825 The Crisis in Ukraine and the End of Post-Cold War Security Delusions in Europe

Authors: Georgios Siachamis

Abstract:

The main objective of this paper is to examine how the crisis in Ukraine can change our perception and understanding of the strategic challenges in Europe. It will also try to address the main factors behind the outbreak of the conflict in Ukraine, the miscalculations and mistakes that led to the escalation of the crisis, and the constructive initiatives that need to be taken in order to avoid further instability in the region. Furthermore, measures to develop a more stable relationship with Russia are presented. Finally, the implementation of a new strategic outlook for the EU is also analysed.

Keywords: crisis management, European grand strategy, crisis in Ukraine, Russian policy

Procedia PDF Downloads 363
6824 Unveiling the Chaura Thrust: Insights into a Blind Out-of-Sequence Thrust in Himachal Pradesh, India

Authors: Rajkumar Ghosh

Abstract:

The Chaura Thrust, located in Himachal Pradesh, India, is a prominent geological feature that exhibits characteristics of an out-of-sequence thrust fault. This paper explores the geological setting of Himachal Pradesh, focusing on the Chaura Thrust's unique characteristics, its classification as an out-of-sequence thrust, and the implications of its presence in the region. The introduction provides background information on thrust faults and out-of-sequence thrusts, emphasizing their significance in understanding the tectonic history and deformation patterns of an area. It also outlines the objectives of the paper, which include examining the Chaura Thrust's geological features, discussing its classification as an out-of-sequence thrust, and assessing its implications for the region. The paper delves into the geological setting of Himachal Pradesh, describing the tectonic framework and providing insights into the formation of thrust faults in the region. Special attention is given to the Chaura Thrust, including its location, extent, and geometry, along with an overview of the associated rock formations and structural characteristics. The concept of out-of-sequence thrusts is introduced, defining their distinctive behavior and highlighting their importance in the understanding of geological processes. The Chaura Thrust is then analyzed in the context of an out-of-sequence thrust, examining the evidence and characteristics that support this classification. Factors contributing to the out-of-sequence behavior of the Chaura Thrust, such as stress interactions and fault interactions, are discussed. The geological implications and significance of the Chaura Thrust are explored, addressing its impact on the regional geology, tectonic evolution, and seismic hazard assessment. The paper also discusses the potential geological hazards associated with the Chaura Thrust and the need for effective mitigation strategies in the region. 
Future research directions and recommendations are provided, highlighting areas that warrant further investigation, such as detailed structural analyses, geodetic measurements, and geophysical surveys. The importance of continued research in understanding and managing geological hazards related to the Chaura Thrust is emphasized. In conclusion, the Chaura Thrust in Himachal Pradesh represents an out-of-sequence thrust fault that has significant implications for the region's geology and tectonic evolution. By studying the unique characteristics and behavior of the Chaura Thrust, researchers can gain valuable insights into the geological processes occurring in Himachal Pradesh and contribute to a better understanding and mitigation of seismic hazards in the area.

Keywords: Chaura Thrust, out-of-sequence thrust, Himachal Pradesh, geological setting, tectonic framework, rock formations, structural characteristics, stress interactions, fault interactions, geological implications, seismic hazard assessment, geological hazards, future research, mitigation strategies

Procedia PDF Downloads 72
6823 Juxtaposition of the Past and the Present: A Pragmatic Stylistic Analysis of the Short Story “Too Much Happiness” by Alice Munro

Authors: Inas Hussein

Abstract:

Alice Munro is a Canadian short-story writer who has been regarded as one of the greatest writers of fiction. Owing to her great contribution to fiction, she was the first Canadian woman and the only short-story writer ever to be awarded the Nobel Prize for Literature, in 2013. Her literary works include collections of short stories and one book published as a novel. Her stories concentrate on the human condition and human relationships as seen through the lens of daily life. The setting in most of her stories is her native Canada: small towns much like the one where she grew up. Her writing style is not only realistic but is also characterized by autobiographical, historical, and regional features. The aim of this research is to analyze one of the key stylistic devices often adopted by Munro in her fiction, the juxtaposition of the past and the present, with reference to the title story in Munro's short-story collection Too Much Happiness. The story under exploration is a brief biography of the Russian mathematician and novelist Sophia Kovalevsky (1850-1891), the first woman to be appointed as a professor of mathematics at a European university, in Stockholm. Thus, the story has a historical protagonist and is set on the European continent. Munro dramatizes the severe historical and cultural constraints that hindered the career of the protagonist. A pragmatic stylistic framework is adopted, and the qualitative analysis is supported by textual reference. The stylistic analysis reveals that the juxtaposition of the past and the present is one of the distinctive features that characterize the author; in a typical Munrovian manner, the protagonist often moves between the units of time: the past, the present, and, sometimes, the future. Munro's style is simple and direct but cleverly constructed and densely complicated by the presence of deeper layers and stories within the story.
Findings of the research reveal that the story under investigation merits reading and analyzing. It is recommended that this story and other stories by Munro be analyzed to further explore the features of her art and style.

Keywords: Alice Munro, Too Much Happiness, style, stylistic analysis

Procedia PDF Downloads 139
6822 Markov Random Field-Based Segmentation Algorithm for Detection of Land Cover Changes Using Uninhabited Aerial Vehicle Synthetic Aperture Radar Polarimetric Images

Authors: Mehrnoosh Omati, Mahmod Reza Sahebi

Abstract:

Information on land-use/land-cover change plays an essential role in environmental assessment, planning and management in regional development. Remotely sensed imagery is widely used to provide information in many change detection applications. Polarimetric synthetic aperture radar (PolSAR) imagery, with its capability to discriminate between different scattering mechanisms, is a powerful tool for environmental monitoring applications. This paper proposes a new boundary-based segmentation algorithm as a fundamental step for land cover change detection. In this method, two PolSAR images are first segmented by integrating a marker-controlled watershed algorithm with a coupled Markov random field (MRF). Then, object-based classification is performed to label image objects as changed or unchanged. Compared with a pixel-based support vector machine (SVM) classifier, this novel segmentation algorithm significantly reduces the speckle effect in PolSAR images and improves the accuracy of binary classification at the object level. Experimental results on Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) polarimetric images show a 3% and 6% improvement in overall accuracy and kappa coefficient, respectively. The proposed method also correctly distinguishes homogeneous image parcels.
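The object-level changed vs. unchanged decision can be illustrated with a minimal sketch (Python/NumPy; the function name, log-ratio criterion and threshold are hypothetical illustrations, not taken from the paper): each segment produced by the watershed/MRF step is flagged by comparing its mean backscatter between the two acquisition dates.

```python
import numpy as np

def object_change_flags(img1, img2, labels, thresh=0.5):
    """Flag each segment as changed when the absolute log-ratio of its
    mean intensities between the two dates exceeds a threshold."""
    flags = {}
    for lab in np.unique(labels):
        mask = labels == lab
        ratio = abs(np.log(img1[mask].mean() + 1e-9)
                    - np.log(img2[mask].mean() + 1e-9))
        flags[int(lab)] = bool(ratio > thresh)
    return flags
```

Because the decision is made per segment rather than per pixel, isolated speckle fluctuations average out within each object, which is the intuition behind the reported speckle reduction.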

Keywords: coupled Markov random field (MRF), environment, object-based analysis, polarimetric SAR (PolSAR) images

Procedia PDF Downloads 211
6821 Development of DNDC Modelling Method for Evaluation of Carbon Dioxide Emission from Arable Soils in European Russia

Authors: Olga Sukhoveeva

Abstract:

Carbon dioxide (CO2) is the main component of the carbon biogeochemical cycle and one of the most important greenhouse gases (GHG). Agriculture, and arable soils in particular, is among the largest sources of GHG emissions to the atmosphere, including CO2. Models may be used to estimate GHG emissions from agriculture if they can be adapted to the conditions of different countries. The only model officially used at the national level for this purpose, in the United Kingdom and China, is DNDC (DeNitrification-DeComposition). In our research, the DNDC model is proposed for estimating GHG emissions from arable soils in Russia. The aim of our research was to create a method of applying DNDC to the evaluation of CO2 emission in Russia based on official statistical information. The target territory was the European part of Russia, where many field experiments are located. In the first step of the research, a database on climate, soil and cropping characteristics for the target region was compiled from governmental, statistical and literature sources. The All-Russia Research Institute of Hydrometeorological Information – World Data Centre provides open daily data on average meteorological and climatic conditions. Spatially averaged values of maximum and minimum air temperature and precipitation must be calculated over the region. Spatially averaged soil characteristics (soil texture, bulk density, pH, soil organic carbon content) can be determined on the basis of the Union State Register of Soil Resources of Russia. Cropping technologies are published by agricultural research institutes and departments. We propose to define cropping system parameters (annual information about crop yields, amounts and types of fertilizers and manure) on the basis of Federal State Statistics Service data. The carbon content of plant biomass may be calculated via formulas developed and published by the Ministry of Natural Resources and Environment of the Russian Federation.
In the second step, CO2 emissions from soil in this region were calculated with DNDC. The modelled values were compared with empirical and literature data, with good results: modelled values were equivalent to the measured ones. It was shown that the DNDC model may be used to evaluate and forecast CO2 emission from arable soils in Russia on the basis of official statistical information. It can also be used to develop a program for decreasing GHG emissions from arable soils to the atmosphere. Financial support: fundamental scientific research theme 0148-2014-0005 No 01201352499 'Solution of fundamental problems of analysis and forecast of Earth climatic system condition' for 2014-2020; fundamental research program of the Presidium of RAS No 51 'Climate change: causes, risks, consequences, problems of adaptation and regulation' for 2018-2020.
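The spatial averaging step for station or polygon data can be sketched in a few lines of plain Python (the function name is ours; one value per station or soil polygon is assumed):

```python
def area_weighted_mean(values, areas):
    """Aggregate station/polygon values to a single regional value,
    weighting each value by the area (or share) it represents."""
    total = sum(areas)
    return sum(v * a for v, a in zip(values, areas)) / total
```

With equal weights this reduces to the ordinary arithmetic mean, which suffices for aggregating daily temperature and precipitation series over the region.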

Keywords: arable soils, carbon dioxide emission, DNDC model, European Russia

Procedia PDF Downloads 184
6820 Meanings and Concepts of Standardization in Systems Medicine

Authors: Imme Petersen, Wiebke Sick, Regine Kollek

Abstract:

In systems medicine, high-throughput technologies produce large amounts of data on different biological and pathological processes, including (disturbed) gene expression, metabolic pathways and signaling. The large volume of data of different types, stored in separate databases and often located at different geographical sites, has posed new challenges for data handling and processing. Tools based on bioinformatics have been developed to resolve the resulting problems of systematizing, standardizing and integrating the various data. However, the heterogeneity of data gathered at different levels of biological complexity is still a major challenge in data analysis. To build multilayer disease modules, large and heterogeneous bodies of disease-related information (e.g., genotype, phenotype, environmental factors) are correlated. Therefore, a great deal of attention in systems medicine has been paid to data standardization, primarily to retrieve and combine large, heterogeneous datasets into standardized and integrated forms and structures. However, this data-centred concept of standardization in systems medicine is contrary to the debate on standardization in science and technology studies (STS), which rather emphasizes the dynamics, contexts and negotiations of standard operating procedures. Based on empirical work on research consortia in Germany that explore the molecular profiles of diseases to establish systems medicine approaches in the clinic, we trace how standardized data are processed and shaped by bioinformatics tools, how scientists using such data in research perceive such standard operating procedures, and which consequences for knowledge production (e.g., modeling) arise from them. Hence, different concepts and meanings of standardization are explored to gain deeper insight into standard operating procedures, not only in systems medicine but also beyond.

Keywords: data, science and technology studies (STS), standardization, systems medicine

Procedia PDF Downloads 334
6819 Consultation Liaison Psychiatry in a Tertiary Care Hospital

Authors: K. Pankaj, R. K. Chaudhary, B. P. Mishra, S. Kochar

Abstract:

Introduction: Consultation-liaison psychiatry is a branch of psychiatry that includes clinical service, teaching and research. A consultation-liaison psychiatrist provides expert opinion, linking patients to other medical professionals and attending to the bio-psycho-social aspects that may underlie their symptoms. Consultation-liaison psychiatry has been recognised as 'the guardian of the holistic approach to the patient', underlining its pre-eminent role in the management of patients admitted to a tertiary care hospital. Aims/Objectives: The aim of the study was to analyse the utilization of psychiatric services and the reasons for referral in a tertiary care hospital. Materials and Methods: The study was done in a tertiary care hospital and included all cases referred from the various inpatient wards to the psychiatry department for consultation. It covered 300 patients over a 3-month period. The International Classification of Diseases, 10th revision (ICD-10) was used to diagnose the referred cases. Results: The largest share of referrals came from the medical intensive care unit (22%), followed by the general medical wards (18.66%). Referrals were most often sought for altered sensorium (24.66%), followed by low mood or unexplained medical symptoms (21%). The most common diagnosis among referrals, as per ICD-10 criteria, was alcohol withdrawal syndrome (21%), followed by unipolar depression and anxiety disorder (~14%), schizophrenia (5%) and polysubstance abuse (2.6%). Conclusions: Our study underlines the importance of utilization of consultation-liaison psychiatric services. It also signifies the need to sensitize our colleagues to psychiatric signs and symptoms and to seek psychiatric consultation in a timely manner to decrease morbidity.

Keywords: consultation-liaison, psychiatry, referral, tertiary care hospital

Procedia PDF Downloads 146
6818 An Investigation of the Use of Visible Spectrophotometric Analysis of Lead in an Herbal Tea Supplement

Authors: Salve Alessandria Alcantara, John Armand E. Aquino, Ma. Veronica Aranda, Nikki Francine Balde, Angeli Therese F. Cruz, Elise Danielle Garcia, Antonie Kyna Lim, Divina Gracia Lucero, Nikolai Thadeus Mappatao, Maylan N. Ocat, Jamille Dyanne L. Pajarillo, Jane Mierial A. Pesigan, Grace Kristin Viva, Jasmine Arielle C. Yap, Kathleen Michelle T. Yu, Joanna J. Orejola, Joanna V. Toralba

Abstract:

Lead is a neurotoxic metallic element that slowly accumulates in bones and tissues, especially when present in products taken on a regular basis such as herbal tea supplements. Although sensitive analytical instruments are already available, the USP limit test for lead is still widely used. However, because of its serious shortcomings, Lang Lang and his colleagues developed a spectrophotometric method for the determination of lead in all types of samples; this is the method adapted in this study. The procedure performed was divided into three parts: digestion, extraction and analysis. For digestion, HNO3 and CH3COOH were used. Afterwards, masking agents and 0.003% and 0.001% dithizone in CHCl3 were added and used for the extraction. For the analysis, the standard addition method and colorimetry were performed. This was done in triplicate under two conditions. The first condition, using 25 µg/mL of standard, resulted in very low absorbances with an r2 of 0.551. This led to the use of a higher concentration, 1 mg/mL, for the second condition. Precipitation of lead cyanide was observed, and the absorbance readings were relatively higher but remained between 0.15 and 0.25, resulting in a very low r2 of 0.429. LOQ and LOD were not computed due to the limitations of the Milton-Roy spectrophotometer. The method performed has a shorter digestion time and uses fewer but more accessible reagents. However, the optimum ratio of the dithizone-lead complex must be observed in order to obtain reliable results while exploring other standard concentrations.
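The standard addition calculation can be sketched as follows (a minimal Python/NumPy illustration; the function name and the example data are hypothetical): absorbance is regressed on the added lead concentration, and the unknown concentration is read off the fitted line as intercept divided by slope.

```python
import numpy as np

def standard_addition_conc(c_added, absorbance):
    """Fit A = m * c_added + b; since b = m * c_unknown for standard
    addition, the unknown concentration is b / m (the x-intercept of
    the fitted line sits at -c_unknown)."""
    m, b = np.polyfit(c_added, absorbance, 1)
    return b / m
```

The r2 values reported above quantify how well such a line fits the triplicate readings; a low r2, as observed here, signals that the extrapolated concentration is unreliable.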

Keywords: herbal tea supplement, lead-dithizone complex, standard addition, visible spectroscopy

Procedia PDF Downloads 378
6817 Reduction of the Cellular Infectivity of SARS-CoV-2 by a Mucoadhesive Nasal Spray

Authors: Adam M. Pitz, Gillian L. Phillipson, Jayant E. Khanolkar, Andrew M. Middleton

Abstract:

New emerging evidence suggests that the nose is the predominant route of entry of the SARS-CoV-2 virus into the host. A virucidal suspension test (conforming in principle to the European Standard EN 14476) was conducted to determine whether a commercial liquid gel intranasal spray containing 1% of the mucoadhesive hydroxypropyl methylcellulose (HPMC) could inhibit the cellular infectivity of the SARS-CoV-2 coronavirus. Virus was added to the test product samples and to controls in a 1:8 ratio and mixed with one part bovine serum albumin as an interfering substance. The test samples were pre-equilibrated to 34 ± 2°C (representing the temperature of the nasopharynx), with the temperature maintained at 34 ± 2°C for virus contact times of 1, 5 and 10 minutes. Neutralized aliquots were inoculated onto host cells (Vero E6 cells, ATCC CRL-1586). The host cells were then incubated at 36 ± 2°C for a period of 7 days. Residual infectious virus in both test samples and controls was detected by virus-induced cytopathic effect. The 50% tissue culture infective dose per mL (TCID50/mL) was determined using the Spearman-Karber method, with results reported as the reduction of the virus titer due to treatment with the test product, expressed as log10. The controls confirmed the validity of the results, with no cytotoxicity or viral interference observed in the neutralized test product samples. The HPMC formulation reduced the SARS-CoV-2 titer, expressed as log10 TCID50, by 2.30 (± 0.17), 2.60 (± 0.19), and 3.88 (± 0.19) at the respective contact times of 1, 5 and 10 minutes. The results demonstrate that this 1% HPMC gel formulation can reduce the cellular infectivity of the SARS-CoV-2 virus, with viral inhibition increasing with exposure time. This 1% HPMC gel is well tolerated and can reside, when delivered via nasal spray, for up to one hour in the nasal cavity.
We conclude that this intranasal gel spray with 1% HPMC, repeat-dosed every few hours, may offer an effective preventive or early intervention solution to limit the transmission and impact of the SARS-CoV-2 coronavirus.
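The Spearman-Karber titer estimate and the log10 reduction can be sketched as follows (a minimal Python illustration of one common formulation, with hypothetical input values; it assumes equally spaced log10 dilutions whose infection proportions are bracketed by 100% and 0%):

```python
def spearman_karber(x_top, d, p):
    """Estimate the log10 dose at which 50% of wells are infected.
    x_top: log10 dose of the least-diluted level tested
    d:     log10 spacing between dilution levels
    p:     proportion of infected wells per level, least diluted first"""
    return x_top + d / 2 - d * sum(p)

def log10_reduction(control_titer, test_titer):
    """Virucidal effect expressed as the drop in log10 TCID50 titer."""
    return control_titer - test_titer
```

The reductions of 2.30, 2.60 and 3.88 log10 quoted above are differences of exactly this kind between the control titer and the titers after 1, 5 and 10 minutes of contact.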

Keywords: hydroxypropyl methylcellulose, mucoadhesive nasal spray, respiratory viruses, SARS-CoV-2

Procedia PDF Downloads 133
6816 Reproductive Biology and Lipid Content of Albacore Tuna (Thunnus alalunga) in the Western Indian Ocean

Authors: Zahirah Dhurmeea, Iker Zudaire, Heidi Pethybridge, Emmanuel Chassot, Maria Cedras, Natacha Nikolic, Jerome Bourjea, Wendy West, Chandani Appadoo, Nathalie Bodin

Abstract:

Scientific advice on the status of fish stocks relies on indicators that are based on strong assumptions about biological parameters such as condition, maturity and fecundity. Currently, information on the biology of albacore tuna, Thunnus alalunga, in the Indian Ocean is scarce. Consequently, many parameters used in stock assessment models for Indian Ocean albacore originate largely from other studied stocks or species of tuna. Inclusion of incorrect biological data in stock assessment models would lead to inappropriate estimates of the stock status used by fisheries managers to establish future catch allowances. The reproductive biology of albacore tuna in the western Indian Ocean was examined through analysis of sex ratio, spawning season, length-at-maturity (L50), spawning frequency, fecundity and fish condition. In addition, the total lipid content (TL) and lipid class composition in the gonads, liver and muscle tissues of female albacore during the reproductive cycle were investigated. A total of 923 female and 867 male albacore were sampled from 2013 to 2015. A bias in sex ratio was found in favour of females with fork length (LF) < 100 cm. Using histological analyses and the gonadosomatic index, spawning was found to occur between 10°S and 30°S, mainly to the east of Madagascar, from October to January. Large females contributed more to reproduction through their longer spawning period compared to small individuals. The L50 (mean ± standard error) of female albacore was estimated at 85.3 ± 0.7 cm LF at the vitellogenic-3 oocyte stage maturity threshold. Albacore spawn on average every 2.2 days within the spawning region and spawning months from November to January. Batch fecundity varied between 0.26 and 2.09 million eggs, and the relative batch fecundity (mean ± standard deviation) was estimated at 53.4 ± 23.2 oocytes g-1 of somatic-gutted weight.
Depending on the maturity stage, TL in ovaries ranged from 7.5 to 577.8 mg g-1 of wet weight (ww) with different proportions of phospholipids (PL), wax esters (WE), triacylglycerols (TAG) and sterols (ST). The highest TL was observed in immature (mostly TAG and PL) and spawning-capable ovaries (mostly PL, WE and TAG). Liver TL varied from 21.1 to 294.8 mg g-1 (ww), and the liver acted as an energy store (mainly TAG and PL) prior to reproduction, when the lowest TL was observed. Muscle TL varied from 2.0 to 71.7 mg g-1 (ww) in mature females without a clear pattern between maturity stages, although higher values of up to 117.3 mg g-1 (ww) were found in immature females. The TL results suggest that albacore could be viewed predominantly as a capital breeder, relying mostly on lipids stored before the onset of reproduction, with little additional energy derived from feeding. This study is the first to provide new information on the reproductive development and classification of albacore in the western Indian Ocean. The reproductive parameters will reduce uncertainty in current stock assessment models, which will eventually promote sustainability of the fishery.
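The length-at-maturity estimate can be illustrated with a minimal sketch (Python; the parameter values below are hypothetical, not the study's fitted coefficients): maturity is modelled as a logistic function of fork length, and L50 is the length at which the predicted probability of being mature is 0.5.

```python
import math

def maturity_prob(length, a, b):
    """Logistic maturity ogive: P(mature | length) = 1 / (1 + exp(-(a + b*length)))."""
    return 1.0 / (1.0 + math.exp(-(a + b * length)))

def l50(a, b):
    """Length at 50% maturity: solve a + b*L = 0 for L."""
    return -a / b
```

In practice a and b are estimated by logistic regression of the binary mature/immature histology calls on fork length.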

Keywords: condition, size-at-maturity, spawning behaviour, temperate tuna, total lipid content

Procedia PDF Downloads 255
6815 Markov Switching of Conditional Variance

Authors: Josip Arneric, Blanka Skrabic Peric

Abstract:

Forecasting volatility, i.e. fluctuations of returns, has been a topic of interest to portfolio managers, option traders and market makers seeking higher profits or less risky positions. Given that volatility is time-varying in high-frequency data and that periods of high volatility tend to cluster, the most commonly used models are GARCH-type models. As standard GARCH models show high volatility persistence, i.e. integrated behaviour of the conditional variance, it is difficult to predict volatility with them. Due to the practical limitations of these models, different approaches based on Markov switching models have been proposed in the literature. In such situations, models in which the parameters are allowed to change over time are more appropriate because they allow some part of the model to depend on the state of the economy. The empirical analysis demonstrates that the Markov switching GARCH model resolves the problem of excessive persistence and outperforms uni-regime GARCH models in forecasting volatility for selected emerging markets.
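The regime-dependent conditional variance recursion can be sketched as follows (a minimal Python illustration with hypothetical parameters; a full Markov switching GARCH estimator would also filter the latent regime probabilities, which is omitted here by treating the regime path as known):

```python
def ms_garch_variance(returns, states, params, sigma2_0):
    """GARCH(1,1) recursion with regime-dependent parameters:
    sigma2_t = omega_s + alpha_s * r_{t-1}**2 + beta_s * sigma2_{t-1},
    where s = states[t] is the regime at time t.
    params maps each regime to its (omega, alpha, beta) triple."""
    sig2 = [sigma2_0]
    for t in range(1, len(returns)):
        omega, alpha, beta = params[states[t]]
        sig2.append(omega + alpha * returns[t - 1] ** 2 + beta * sig2[-1])
    return sig2
```

Letting alpha + beta differ across regimes is exactly what breaks the excessive persistence of a single-regime GARCH: the turbulent regime can be reactive while the calm regime stays persistent.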

Keywords: emerging markets, Markov switching, GARCH model, transition probabilities

Procedia PDF Downloads 449
6814 Different Cognitive Processes in Selecting Spatial Demonstratives: A Cross-Linguistic Experimental Survey

Authors: Yusuke Sugaya

Abstract:

Our research conducts a cross-linguistic experimental investigation into the cognitive processes involved in the distance judgments necessary for selecting demonstratives in deictic usage. Speakers may consider the addressee's judgment or apply certain criteria for distance judgment when they produce demonstratives. While language and cultural differences can be assumed, it remains unclear how these differences manifest across languages. This research conducted online experiments involving speakers of six languages (Japanese, Spanish, Irish, English, Italian and French) in which a wide variety of drawings were presented on a screen, with conditions varied along three dimensions: addressee, comparisons and standard. The results of the experiments revealed various distinct features associated with demonstratives in each language, highlighting differences from a comparative standpoint. For one thing, a specific reference point (i.e., the standard) influenced selection in Japanese and Spanish, whereas competitors exerted a relatively stronger influence in English and Italian.

Keywords: demonstratives, cross-linguistic experiment, distance judgment, social cognition

Procedia PDF Downloads 40
6813 The Systems Biology Verification Endeavor: Harness the Power of the Crowd to Address Computational and Biological Challenges

Authors: Stephanie Boue, Nicolas Sierro, Julia Hoeng, Manuel C. Peitsch

Abstract:

Systems biology relies on large numbers of data points and sophisticated methods to extract biologically meaningful signal and mechanistic understanding. For example, analyses of transcriptomics and proteomics data make it possible to gain insights into the molecular differences between tissues exposed to diverse stimuli or test items. Whereas the interpretation of endpoints that specifically measure a mechanism is relatively straightforward, the interpretation of big data is more complex and benefits from comparing results obtained with diverse analysis methods. The sbv IMPROVER project was created to implement solutions for verifying systems biology data, methods and conclusions. Computational challenges leveraging the wisdom of the crowd allow methods to be benchmarked for specific tasks, such as signature extraction and/or sample classification. Four challenges have already been successfully conducted and confirmed that the aggregation of predictions often leads to better results than individual predictions and that methods perform best in specific contexts. Whenever the scientific question of interest has no gold standard but may greatly benefit from the scientific community coming together to discuss approaches and results, datathons are set up. The inaugural sbv IMPROVER datathon was held in Singapore on 23-24 September 2016. It allowed bioinformaticians and data scientists to consolidate their ideas and work on the most promising methods as teams, after having initially reflected on the problem on their own. The outcome is a set of visualization and analysis methods that will be shared with the scientific community via the Garuda platform, an open connectivity platform that provides a framework for navigating through different applications, databases and services in biology and medicine.
We will present the results we obtained when analyzing the data with our network-based method, and introduce a datathon that will take place in Japan to encourage the analysis of the same datasets with other methods and allow for the consolidation of conclusions.

Keywords: big data interpretation, datathon, systems toxicology, verification

Procedia PDF Downloads 273
6812 Design of Data Management Software System Supporting Rendezvous and Docking with Various Spaceships

Authors: Zhan Panpan, Lu Lan, Sun Yong, He Xiongwen, Yan Dong, Gu Ming

Abstract:

The function of the two-spacecraft docking network, i.e., communication with and control of a docking target by various spacecraft, is realized in the space lab data management system. In order to solve the problems of the complex data communication modes between the space lab and various spaceships, and of poor software reuse caused by non-standard protocols, a data management software system supporting rendezvous and docking with various spaceships has been designed. The software system is based on the CCSDS Spacecraft Onboard Interface Services (SOIS). It consists of a Software Driver Layer, a Middleware Layer and an Application Layer. The Software Driver Layer hides the various device interfaces behind a uniform device driver framework. The Middleware Layer is divided into three layers: a transfer layer, an application support layer and a system business layer. Communication over the space lab platform bus and the docking bus is realized in the transfer layer. The application support layer provides inter-task communication and unified time management for the software system. The data management functions are realized in the system business layer, which contains the telemetry management service, telecontrol management service, flight status management service, rendezvous and docking management service, and so on. The Application Layer accomplishes the tasks defined for the space lab data management system using the standard interfaces supplied by the Middleware Layer. On the basis of this layered architecture, the rendezvous and docking tasks and the rendezvous and docking management service are independent within the software system. The rendezvous and docking tasks are activated and executed according to the different spaceships. In this way, the communication management functions in the independent flight mode, the combined mode with the manned spaceship, and the combined mode with the cargo spaceship are achieved separately.
The software architecture defines standard application interfaces for the services in each layer. Different requirements of the space lab can be supported by the use of the standard services in each layer, and the scalability and flexibility of the data management software are thereby effectively improved. The system can also dynamically expand the number of visiting spaceships and adapt to their protocols. The software system has been applied in the data management subsystem of the space lab and has been verified in the flight of the space lab. The results of this research can provide a basis for the design of the data management system of a future space station.
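The uniform device driver framework in the Software Driver Layer can be sketched as follows (a minimal Python illustration; the class and method names are hypothetical, not taken from the system): every bus or device exposes the same read/write interface, so the layers above never depend on a concrete device type.

```python
from abc import ABC, abstractmethod

class DeviceDriver(ABC):
    """Uniform driver interface hiding concrete device/bus details."""
    @abstractmethod
    def write(self, data: bytes) -> int: ...
    @abstractmethod
    def read(self, n: int) -> bytes: ...

class LoopbackBusDriver(DeviceDriver):
    """Stand-in for a platform-bus or docking-bus driver; echoes
    written bytes back to the reader for demonstration purposes."""
    def __init__(self):
        self._buf = b""
    def write(self, data: bytes) -> int:
        self._buf += data
        return len(data)
    def read(self, n: int) -> bytes:
        out, self._buf = self._buf[:n], self._buf[n:]
        return out
```

Because middleware code talks only to the abstract interface, swapping the docking bus for the platform bus, or adding a new visiting spaceship's bus, requires no change above the driver layer.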

Keywords: space lab, rendezvous and docking, data management, software system

Procedia PDF Downloads 361
6811 Changes in Geospatial Structure of Households in the Czech Republic: Findings from Population and Housing Census

Authors: Jaroslav Kraus

Abstract:

Spatial information about demographic processes is a standard part of statistical outputs in the Czech Republic. That was also the case for the Population and Housing Census held in 2011, which is the starting point for a follow-up study devoted to two basic types of households: single-person households and households of one complete family. Together they make up more than 80 percent of all households, but their share and spatial structure have been changing over the long term. The increase in single-person households is a result of the long-term decrease in fertility and increase in divorce, but also of the possibility of living separately. There are regions in the Czech Republic with traditional demographic behaviour, and regions, like the capital Prague and some others, with changing patterns. The population census is based, according to international standards, on the concept of the currently living population. Three types of geospatial approaches will be used for the analysis: (i) measures of geographic distribution; (ii) cluster mapping to identify the locations of statistically significant hot spots, cold spots, spatial outliers and similar features; and (iii) a pattern analysis approach as a starting point for more in-depth analyses (geospatial regression) in the future. For analysis of this type of data, the numbers of households by type should be treated as distinct objects. All events in a meaningfully delimited study region (e.g., municipalities) will be included in the analysis. Commonly produced measures of central tendency and spread will include identification of the location of the center of the point set (at the NUTS3 level) and of the median center; standard distance, weighted standard distance and standard deviational ellipses will also be used.
Identifying that clustering exists in census household datasets does not provide a detailed picture of the nature and pattern of that clustering, but it will be helpful to apply simple hot-spot (and cold-spot) identification techniques to such datasets. Once the spatial structure of households is determined, any particular measure of autocorrelation can be constructed by defining a way of measuring the difference between location attribute values. The most widely used measure is Moran's I, which will be applied to municipal units for which the numerical ratio is calculated. Local statistics arise naturally out of any of the methods for measuring spatial autocorrelation and will be applied to develop localized variants of almost any standard summary statistic. Local Moran's I will give an indication of household data homogeneity and diversity at the municipal level.
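Global Moran's I can be computed with a short sketch (Python/NumPy; the toy attribute values and binary contiguity weights below are hypothetical stand-ins for municipal household ratios and adjacency):

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I = (n / sum(w)) * (z' W z) / (z' z),
    where z are deviations of the attribute from its mean and w is
    a spatial weights matrix (e.g., binary contiguity of municipalities)."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    z = x - x.mean()
    return (len(x) / w.sum()) * (z @ w @ z) / (z @ z)
```

Values near +1 indicate that similar household ratios cluster in neighbouring municipalities, values near 0 indicate spatial randomness, and negative values indicate dispersion; local Moran's I decomposes the same quantity unit by unit.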

Keywords: census, geo-demography, households, the Czech Republic

Procedia PDF Downloads 93
6810 Multi-Stage Classification for Lung Lesion Detection on CT Scan Images Applying Medical Image Processing Technique

Authors: Behnaz Sohani, Sahand Shahalinezhad, Amir Rahmani, Aliyu Aliyu

Abstract:

Recently, medical imaging, and specifically medical image processing, has become one of the most dynamically developing areas of medical science. It has led to the emergence of new approaches to the prevention, diagnosis and treatment of various diseases. In the process of diagnosing lung cancer, medical professionals rely on computed tomography (CT) scans, in which failure to correctly identify masses can lead to incorrect diagnosis or sampling of lung tissue. Identification and demarcation of masses, in terms of detecting cancer within lung tissue, are critical challenges in diagnosis. In this work, a segmentation system based on image processing techniques has been applied for detection purposes. In particular, the use and validation of a novel lung cancer detection algorithm are presented through simulation, performed on CT images using multilevel thresholding. The proposed technique consists of segmentation, feature extraction, and feature selection and classification. In more detail, the features carrying useful information are selected after feature extraction. Eventually, the output image of lung cancer is obtained with accuracy values of 96.3% and 87.25%. The purpose of feature extraction in the proposed approach is to transform the raw data into a more usable form for subsequent statistical processing. Future steps will involve employing the current feature extraction method to achieve more accurate resulting images, including making further details available to machine vision systems to recognise objects in lung CT scan images.
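The multilevel thresholding step can be illustrated with a minimal sketch (Python/NumPy; the threshold values below are hypothetical, whereas in practice they would be chosen by an optimality criterion such as Otsu's method):

```python
import numpy as np

def multilevel_threshold(img, thresholds):
    """Partition pixel intensities into len(thresholds) + 1 classes;
    np.digitize returns, for each pixel, the index of the interval
    between consecutive thresholds that it falls into."""
    return np.digitize(img, sorted(thresholds))
```

The resulting class map is what subsequent segmentation and feature extraction operate on, e.g. isolating the intensity class corresponding to candidate masses.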

Keywords: lung cancer detection, image segmentation, lung computed tomography (CT) images, medical image processing

Procedia PDF Downloads 91
6809 Appropriate Depth of Needle Insertion during Rhomboid Major Trigger Point Block

Authors: Seongho Jang

Abstract:

Objective: To investigate the appropriate depth of needle insertion during trigger point injection into the rhomboid major muscle. Methods: Sixty-two patients who visited our department with shoulder or upper back pain participated in this study. The distance between the skin and the rhomboid major muscle (SM) and the distance between the skin and the rib (SB) were measured using ultrasonography. The subjects were divided into 3 groups according to BMI: less than 23 kg/m2 (underweight or normal group); 23 kg/m2 or more to less than 25 kg/m2 (overweight group); and 25 kg/m2 or more (obese group). The mean ± standard deviation (SD) of SM and SB was calculated for each group. The range between the mean + 1 SD of SM and the mean - 1 SD of SB was defined as the safe margin. Results: The underweight or normal group's SM, SB, and safe margin were 1.2 ± 0.2 cm, 2.1 ± 0.4 cm, and 1.4 to 1.7 cm, respectively. The overweight group's SM and SB were 1.4 ± 0.2 cm and 2.4 ± 0.9 cm, respectively; the safe margin could not be calculated for this group. The obese group's SM, SB, and safe margin were 1.8 ± 0.3 cm, 2.7 ± 0.5 cm, and 2.1 to 2.2 cm, respectively. Conclusion: This study will help to set a standard depth for safe and effective needle insertion into the rhomboid major muscle without causing complications.
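The safe-margin rule described above (from mean + 1 SD of the skin-to-muscle distance down to mean - 1 SD of the skin-to-rib distance) can be sketched in a few lines of Python (the function name is ours):

```python
def safe_margin(sm_mean, sm_sd, sb_mean, sb_sd):
    """Return (lower, upper) bounds of the safe insertion depth in cm,
    or None when mean+1SD of SM already exceeds mean-1SD of SB."""
    lower = sm_mean + sm_sd  # deep enough to reach the muscle
    upper = sb_mean - sb_sd  # shallow enough to stay clear of the rib
    return (lower, upper) if lower < upper else None
```

With the study's overweight-group values the interval is empty, which is why no safe margin could be reported for that group.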

Keywords: pneumothorax, rhomboid major muscle, trigger point injection, ultrasound

Procedia PDF Downloads 285