Search results for: two point discrimination
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5551

4441 Blueprinting of a Normalized Supply Chain Processes: Results in Implementing Normalized Software Systems

Authors: Bassam Istanbouli

Abstract:

With technology evolving every day and with the increase in global competition, industries are always under pressure to be the best. They need to provide good quality products at competitive prices, when and how the customer wants them. In order to achieve this level of service, products and their respective supply chain processes need to be flexible and evolvable; otherwise changes will be extremely expensive, slow and accompanied by many combinatorial effects. Those combinatorial effects impact the whole organizational structure from a management, financial, documentation and logistics perspective, and especially from the perspective of the Enterprise Resource Planning (ERP) information system. By applying the normalized systems concept/theory to segments of the supply chain, we believe these effects can be minimized, especially at the time of launching an organization-wide global software project. The purpose of this paper is to point out that if an organization wants to develop software from scratch or implement an existing ERP system for its business needs, and if its business processes are normalized and modular, then this will most probably yield a normalized and modular software system that can be easily modified when the business evolves. Another important goal of this paper is to increase awareness regarding the design of the business processes in a software implementation project. If the blueprints created are normalized, then the software developers and configurators will use those modular blueprints to map them into modular software. This paper only prepares the ground for further studies; the above concept will be supported by going through the steps of developing, configuring and/or implementing a software system for an organization using two methods: the Software Development Lifecycle method (SDLC) and the Accelerated SAP implementation method (ASAP). Both methods start with the customer requirements, then blueprinting of the business processes, and finally mapping those processes into a software system. Since those requirements and processes are the starting point of the implementation, normalizing those processes will result in normalized software.

Keywords: blueprint, ERP, modular, normalized

Procedia PDF Downloads 139
4440 Changes in Geospatial Structure of Households in the Czech Republic: Findings from Population and Housing Census

Authors: Jaroslav Kraus

Abstract:

Spatial information about demographic processes is a standard part of statistical outputs in the Czech Republic. That was also the case for the Population and Housing Census held in 2011, which is the starting point for a follow-up study devoted to two basic types of households: single-person households and households of one complete family. Together they account for more than 80 percent of all households, but their share and spatial structure have been changing over the long term. The increase in single-person households is a result of the long-term decrease in fertility and the increase in divorce, but also of the possibility of living separately. There are regions in the Czech Republic with traditional demographic behavior, and regions, such as the capital Prague and some others, with a changing pattern. The population census is based, according to international standards, on the concept of the currently living population. Three types of geospatial approaches will be used for the analysis: (i) measures of geographic distribution; (ii) mapping clusters to identify the locations of statistically significant hot spots, cold spots, spatial outliers, and similar features; and (iii) an analyzing-pattern approach as a starting point for more in-depth analyses (geospatial regression) in the future. For this type of data, the numbers of households by type are treated as distinct objects, and all events in a meaningfully delimited study region (e.g., municipalities) are included in the analysis. Commonly produced measures of central tendency and spread will include identification of the location of the center of the point set (at NUTS3 level), identification of the median center, and the standard distance; weighted standard distance and standard deviational ellipses will also be used. Identifying that clustering exists in the census household datasets does not provide a detailed picture of the nature and pattern of that clustering, but it will be helpful to apply simple hot-spot (and cold-spot) identification techniques to such datasets. Once the spatial structure of households has been determined, a particular measure of autocorrelation can be constructed by defining a way of measuring the difference between location attribute values. The most widely used measure is Moran's I, which will be applied to municipal units for which the numerical ratio is calculated. Local statistics arise naturally out of any of the methods for measuring spatial autocorrelation and will be applied to develop localized variants of almost any standard summary statistic. Local Moran's I will give an indication of household data homogeneity and diversity at the municipal level.
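As a rough illustration of the global Moran's I statistic mentioned above, the sketch below computes it with plain NumPy on a hypothetical set of municipal shares of single-person households and a hypothetical contiguity matrix; none of the values come from the census data.

import numpy as np

def morans_i(values, weights):
    """Global Moran's I for a 1-D attribute and a spatial weights matrix."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    n = x.size
    z = x - x.mean()                      # deviations from the mean
    num = n * np.sum(w * np.outer(z, z))  # weighted cross-products of deviations
    den = w.sum() * np.sum(z ** 2)
    return num / den

# Toy example: share of single-person households in five adjacent municipalities
shares = [0.38, 0.41, 0.35, 0.22, 0.20]
# Binary contiguity weights (1 = neighbouring municipalities), zero diagonal
W = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 0],
              [0, 1, 1, 0, 1],
              [0, 0, 0, 1, 0]])

print(f"Moran's I = {morans_i(shares, W):.3f}")  # values above 0 suggest spatial clustering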

Keywords: census, geo-demography, households, the Czech Republic

Procedia PDF Downloads 95
4439 World Peace and Conflict Resolution: A Solution from a Buddhist Point of View

Authors: Samitharathana R. Wadigala

Abstract:

Peace will not be established until self-consciousness reveals itself in human beings. In this nuclear age, the establishment of a lasting peace on earth represents the primary condition for the preservation of human civilization and the survival of human beings. Nothing perhaps is so important and indispensable as the achievement and maintenance of peace in the modern world today. Peace in today's world implies much more than the mere absence of war and violence. In the interdependent world of today, the United Nations needs to be representative of the modern world and democratic in its functioning, because it came into existence to save the generations from the scourge of war and conflict. Buddhism is a religion of peaceful co-existence and a philosophy of enlightenment. Violence and conflict, from the perspective of the Buddhist theory of interdependent origination (Paṭiccasamuppāda), are, like everything else in the world, a product of causes and conditions. Buddhism is fully compatible with a congenial and peaceful global order, and its canonical literature, doctrines, and philosophy are well suited to inter-faith dialogue, harmony, and universal peace. Even today, Buddhism can resurrect universal brotherhood, peaceful co-existence and harmonious surroundings in the comity of nations. With its increasing vitality in regions around the world, many people today turn to Buddhism for relief and guidance at a time when peace seems to be a deferred dream more than ever. From a Buddhist point of view, the roots of all unwholesome actions, i.e. greed, hatred and delusion, are viewed as the root causes of all human conflict. Conflict often emanates from attachment to material things: pleasures, property, territory, wealth, economic dominance or political superiority. Buddhism has some particularly rich resources for dissolving conflict. This paper addresses the Buddhist perspective on the causes of conflict and the ways to resolve conflict in order to realize world peace. The world has enough to satisfy everybody's needs, but not everybody's greed.

Keywords: Buddhism, conflict-violence, peace, self-consciousness

Procedia PDF Downloads 208
4438 Demand Forecasting to Reduce Dead Stock and Lost Sales: A Case Study of a Wholesale Electric Equipment and Parts Company

Authors: Korpapa Srisamai, Pawee Siriruk

Abstract:

The purpose of this study is to forecast product demand and develop appropriate and adequate procurement plans to meet customer needs and reduce costs. When stock exceeds customer demand or does not move, the company has to provide additional storage space; moreover, some items, when stored for a long period of time, deteriorate into dead stock. A case study of a wholesale company for electronic equipment and components, which faces uncertain customer demand, is considered: the actual purchase orders of customers are not equal to the forecasts the customers provide. In some cases, customers demand more, so that the product on hand is insufficient to meet their needs; other customers demand less than estimated, causing excess inventory, insufficient storage space and dead stock. This study aims to reduce lost sales opportunities and the number of goods remaining in the warehouse, using a sample of 30 of the company's most popular products. The data were collected during the study period from January to October 2022. The forecasting methods used are the simple moving average, the weighted moving average, and exponential smoothing. The economic order quantity and the reorder point are then calculated to meet customer needs, and the results are tracked. The research results are very beneficial to the company: it can reduce lost sales opportunities by 20%, so that it has enough products to meet customer needs, and can reduce dead stock by up to 10%. This enables the company to order products more accurately, increasing profits and freeing storage space.
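A minimal sketch of the calculations named above (simple and weighted moving averages, exponential smoothing, economic order quantity and reorder point) is given below; all demand figures, costs and lead times are hypothetical, not the company's data.

import numpy as np

demand = [120, 135, 128, 150, 142, 160]   # monthly demand for one product (units, hypothetical)

def simple_moving_average(data, window=3):
    return float(np.mean(data[-window:]))

def weighted_moving_average(data, weights=(0.5, 0.3, 0.2)):
    recent = data[-len(weights):][::-1]   # most recent month first
    return float(np.dot(recent, weights))

def exponential_smoothing(data, alpha=0.3):
    forecast = data[0]
    for d in data[1:]:
        forecast = alpha * d + (1 - alpha) * forecast
    return forecast

def eoq(annual_demand, order_cost, holding_cost):
    """Classical economic order quantity."""
    return (2 * annual_demand * order_cost / holding_cost) ** 0.5

def reorder_point(daily_demand, lead_time_days, safety_stock=0):
    return daily_demand * lead_time_days + safety_stock

next_month = exponential_smoothing(demand)
print("SMA forecast:", simple_moving_average(demand))
print("WMA forecast:", weighted_moving_average(demand))
print("Exp. smoothing forecast:", round(next_month, 1))
print("EOQ:", round(eoq(annual_demand=12 * next_month, order_cost=50, holding_cost=2), 1))
print("Reorder point:", round(reorder_point(daily_demand=next_month / 30, lead_time_days=7), 1))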

Keywords: demand forecast, reorder point, lost sale, dead stock

Procedia PDF Downloads 118
4437 Machine Learning Classification of Fused Sentinel-1 and Sentinel-2 Image Data Towards Mapping Fruit Plantations in Highly Heterogenous Landscapes

Authors: Yingisani Chabalala, Elhadi Adam, Khalid Adem Ali

Abstract:

Mapping smallholder fruit plantations using optical data is challenging due to morphological landscape heterogeneity and crop types with overlapping spectral signatures. Furthermore, cloud cover limits the use of optical sensing, especially in subtropical climates where it is persistent. This research assessed the effectiveness of Sentinel-1 (S1) and Sentinel-2 (S2) data for mapping fruit trees and co-existing land-use types by using support vector machine (SVM) and random forest (RF) classifiers independently. These classifiers were also applied to fused data from the two sensors. Feature ranks were extracted using the RF mean decrease in accuracy (MDA) and forward variable selection (FVS) to identify optimal spectral windows for classifying fruit trees. Based on RF MDA and FVS, the SVM classifier achieved relatively high classification accuracy on the fused satellite data, with overall accuracy (OA) = 91.6% and kappa coefficient = 0.91. Applying the SVM to S1, S2, the S2 selected variables and the S1S2 fusion independently produced OA = 27.64% with kappa = 0.13, OA = 87% with kappa = 0.87, OA = 69.33% with kappa = 0.69, and OA = 87.01% with kappa = 0.87, respectively. The results also indicated that the optimal spectral bands for fruit tree mapping are the green (B3) and SWIR_2 (B10) bands for S2, and the vertical-horizontal (VH) polarization band for S1. Including the textural metrics from the VV channel improved the discrimination of crops and co-existing land-use cover types. The fusion approach proved robust and well suited for accurate smallholder fruit plantation mapping.
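For readers who want to reproduce this kind of workflow, the sketch below shows a hedged scikit-learn version of the pipeline (RF-based feature ranking followed by an SVM on the selected features); the arrays are random placeholders rather than the actual S1/S2 per-pixel samples, and impurity-based importances stand in for the mean-decrease-in-accuracy ranking.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))       # e.g. 10 S2 bands + 2 S1 polarisations (placeholder)
y = rng.integers(0, 4, size=500)     # 4 fruit-tree / land-cover classes (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Rank the fused features with a random forest, then keep the best ones
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
ranking = np.argsort(rf.feature_importances_)[::-1]
top = ranking[:6]

# Train the SVM on the selected features of the fused stack
svm = SVC(kernel="rbf", C=10, gamma="scale").fit(X_tr[:, top], y_tr)
pred = svm.predict(X_te[:, top])
print("OA:", accuracy_score(y_te, pred))
print("kappa:", cohen_kappa_score(y_te, pred))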

Keywords: smallholder agriculture, fruit trees, data fusion, precision agriculture

Procedia PDF Downloads 53
4436 Developing an Instrument to Measure Teachers’ Self-Efficacy of Teaching Innovation Skills

Authors: Huda S. Al-Azmi

Abstract:

There is a growing consensus that the adoption of teacher self-efficacy measurement tools helps to assess teachers' abilities in specific areas in order to improve their skills. As a result, different instruments to assess teachers' abilities have been developed by academics and practitioners. However, many of these instruments either focus on general teaching skills or, on the other hand, are very specific to one subject. As such, they do not offer a tool to measure the ability of teachers to teach 21st century skills such as innovation skills. Teaching innovation skills helps to prepare students for lives and careers in the 21st century. The purpose of this study is to develop an instrument that measures teachers' self-efficacy for teaching innovation skills in the classroom context and evaluates teachers' beliefs regarding their ability to teach these skills. To reach this goal, the 16-item instrument measures four dimensions of innovation skills: creativity, critical thinking, communication, and collaboration. A total of 211 secondary-school teachers filled out the survey, and the responses were analyzed quantitatively to evaluate the quality of the instrument. The instrument's reliability and item analysis were computed using jMetrik. The results showed that mean self-efficacy ranged from 3 to 3.6, without extremely high or low self-efficacy scores. The discrimination analysis revealed that one item had a negative correlation with the total score and three items had low correlations with the total. The reliabilities of the items ranged from 0.64 to 0.69, and the instrument needs a couple of revisions before practical use. The study concluded that one item should be discarded and five items revised to increase the quality of the instrument for future work.
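Although the study used jMetrik for these statistics, a minimal Python sketch of the same two quantities, Cronbach's alpha and corrected item-total (discrimination) correlations, is shown below on a hypothetical 211 x 16 response matrix.

import numpy as np

rng = np.random.default_rng(1)
responses = rng.integers(1, 6, size=(211, 16)).astype(float)  # 211 teachers, 16 Likert items (placeholder)

def cronbach_alpha(items):
    """Internal-consistency reliability of a set of items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def item_total_correlations(items):
    """Correlation of each item with the total score excluding that item."""
    corrs = []
    for j in range(items.shape[1]):
        rest = np.delete(items, j, axis=1).sum(axis=1)
        corrs.append(np.corrcoef(items[:, j], rest)[0, 1])
    return np.array(corrs)

print("Cronbach's alpha:", round(cronbach_alpha(responses), 3))
discrimination = item_total_correlations(responses)
print("items with low or negative discrimination:", np.where(discrimination < 0.2)[0])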

Keywords: critical thinking, collaboration, innovation skills, self-efficacy

Procedia PDF Downloads 213
4435 Interpreting Possibilities: Teaching Without Borders

Authors: Mira Kadric

Abstract:

The proposed paper deals with a newly developed approach to the teaching of interpreting, combining traditional didactics with a new element. The fundamental principle of the approach is taken from theatre pedagogy (Augusto Boal's Theatre of the Oppressed) and includes a discussion of social power relations. From the point of view of the sociology of education, this implies strengthening students' individual potential for self-determination on a number of levels, especially in view of the present increase in social responsibility. This knowledge constitutes a starting point and basis for the process of self-determined action. It takes place in the context of a creative didactic policy which identifies didactic goals, provides clear sequences of content, specifies interdisciplinary methods and examines their practical adequacy, and ultimately serves not only individual translators and interpreters but all parties involved. The goal of the presented didactic model is to promote independent work and problem-solving strategies; this helps to develop creative potential and self-confident behaviour. It also conveys realistic knowledge of professional reality, and thus of the real socio-political and professional parameters involved. As well as providing a discussion of fundamental questions relevant to Translation and Interpreting Studies, this also serves to improve an interdisciplinary didactic approach which simulates interpreting reality and illustrates processes and strategies which can take place in real life. The idea is illustrated in more detail with methods taken from Augusto Boal's Theatre of the Oppressed, including examples from (dialogue) interpreting teaching based on documentation of recordings made in a seminar in the summer term of 2014.

Keywords: augusto boal, didactic model, interpreting teaching, theatre of the oppressed

Procedia PDF Downloads 429
4434 Egalitarianism and Social Stratification: An Overview of the Caste System among the Southern Muslims of Sri Lanka

Authors: Mohamed Faslan

Abstract:

This paper describes how caste-based differentiation functions among the Southern Muslims of Sri Lanka despite Islamic egalitarian principles. Such differentiation is not promoted by religious teachings, mosques, or the various Islamic religious denominations; instead, it underpins a hereditary, hierarchical stratification of the social structure. Since Islam is against social stratification and promotes egalitarianism, what are the persuasive social structures that organize the existing caste system among Southern Muslims? To answer this puzzle, the paper discusses and analyses the caste system under five subsections: ancestry, marriage, geography, mosque ownership or trusteeship, and occupation. The study of caste in Sri Lanka is generally compartmentalized into separate Sinhala and Tamil systems, and most caste studies have focused on the characteristics, upward mobility, or discrimination of specific castes in relation to other castes within ethnic systems. As an operational definition, in this paper 'southern' or the south of Sri Lanka refers to the Kalutara, Galle and Matara Districts. The research was conducted in these three districts, and the respondents were selected purposively. Community history interviews were used as a tool for collecting information, and grounded theory was used for the analysis. Caste stratification among the Southern Muslims of Sri Lanka is directly connected to whether they are descended from Arab or South Indian ancestors: Arab ancestry is considered upper caste and South Indian ancestry lower caste. Endogamy is the most significant driving factor keeping the caste system functioning among Muslims, while the other factors, geography, mosques and occupation, work as supporting factors.

Keywords: caste, social stratification, Sri Lanka Muslims, endogamy

Procedia PDF Downloads 173
4433 A New Center of Motion in Cabling Robots

Authors: Alireza Abbasi Moshaii, Farshid Najafi

Abstract:

In this paper, a new model for creating a centre of motion is proposed. The new method uses cables, which makes it very useful in robotics because it is light and easy to assemble. It is particularly suitable for robots that need to remain in touch with objects, as described in the following. The accuracy of the idea is proved by an experiment. The system could be used in robots that need a fixed contact point while making a circular motion, such as dancing, physician or repair robots.

Keywords: centre of motion, robotic cables, permanent touching, mechatronics engineering

Procedia PDF Downloads 440
4432 Laboratory Scale Purification of Water from Copper Waste

Authors: Mumtaz Khan, Adeel Shahid, Waqas Khan

Abstract:

The presence of heavy metals in water streams is a major danger to aquatic life and ultimately affects human health. The removal of copper (Cu) from synthetic solution by ispaghula husk, maize fibre, and maize oil cake was studied under batch conditions. Different experimental parameters such as contact time, initial solution pH, agitation rate, initial Cu concentration, biosorbent concentration, and biosorbent particle size were studied to quantify Cu biosorption. The rate of adsorption of metal ions was very fast at the beginning and became slow after reaching the saturation point, followed by a slower active metabolic uptake of metal ions into the cells. Up to a certain point (pH = 4, Cu concentration ~640 mg/L, agitation rate ~400 rpm, biosorbent concentration ~0.5 g, 3 g and 3 g for ispaghula husk, maize fibre and maize oil cake, respectively), increasing the pH, Cu concentration, agitation rate, and biosorbent concentration increased the biosorption rate; the sorption capacity also increased with decreasing particle size. At the optimized experimental parameters, the maximum Cu biosorption by ispaghula husk, maize fibre and maize oil cake was 86.7%, 59.6% and 71.3%, respectively. Moreover, the kinetics studies demonstrated that the biosorption of copper on ispaghula husk, maize fibre, and maize oil cake followed pseudo-second-order kinetics. The adsorption results were fitted to both the Langmuir and Freundlich models; the Langmuir model represented the sorption process better than the Freundlich model, with an R² value of ~0.978. Optimization of the physical and environmental parameters revealed ispaghula husk to be a more potent copper biosorbent than maize fibre and maize oil cake. The sorbent is cheap and easily available, so this study can be applied to remove Cu impurities on a pilot and industrial scale after certain modifications.
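A hedged sketch of fitting the two isotherm models named above with SciPy is shown below; the equilibrium concentrations and uptakes are hypothetical values, not the study's measurements.

import numpy as np
from scipy.optimize import curve_fit

Ce = np.array([20, 50, 100, 200, 400, 640], dtype=float)   # equilibrium Cu concentration, mg/L (hypothetical)
qe = np.array([8.5, 17.2, 27.9, 38.6, 46.1, 49.3])         # uptake at equilibrium, mg/g (hypothetical)

def langmuir(c, qmax, kl):
    return qmax * kl * c / (1 + kl * c)

def freundlich(c, kf, n):
    return kf * c ** (1.0 / n)

for name, model, p0 in [("Langmuir", langmuir, (50, 0.01)),
                        ("Freundlich", freundlich, (1.0, 2.0))]:
    popt, _ = curve_fit(model, Ce, qe, p0=p0)
    residuals = qe - model(Ce, *popt)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((qe - qe.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot                 # goodness of fit for comparing the two isotherms
    print(f"{name}: parameters = {np.round(popt, 4)}, R^2 = {r2:.3f}")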

Keywords: biosorption, copper, ispaghula husk, maize fibre, maize oil cake, purification

Procedia PDF Downloads 408
4431 The Evaluation for Interfacial Adhesion between SOFC and Metal Adhesive in the High Temperature Environment

Authors: Sang Koo Jeon, Seung Hoon Nahm, Oh Heon Kwon

Abstract:

The unit cells of a solid oxide fuel cell (SOFC) must be stacked in several layers to obtain high power. Most researchers have been concerned with the performance of the stacked SOFC rather than its structural stability, and have been especially interested in how to design the stack to reduce electrical loss and improve efficiency. Consequently, stacked SOFCs able to produce high electrical power, and related parts such as the manifold, gas seal and bipolar plate, were developed to optimize the stack design. However, the SOFC unit cell was simply layered on the interconnector without adhesion, and hydrogen and oxygen were injected into the interfacial layer at high temperature. Under operating conditions, the interfacial layer can therefore be one of the weak points of the stacked SOFC, and evaluation of its structural safety against failure is essential. In this study, the interfacial adhesion between the SOFC and a metal adhesive was estimated in a high-temperature environment. The metal adhesive was used to strongly connect the SOFC unit cell with the interconnector and to provide electrical conductivity between them. A four-point bending test was performed to measure the interfacial adhesion. The SOFC unit cell and a SiO2 wafer were diced and then attached with the metal adhesive; the SiO2 wafer had a center notch to initiate a crack from the notch tip. A modified stereomicroscope combined with a CCD camera and a length-measurement system was used to observe the fracture behavior. Additionally, because the metal adhesive is affected by high temperature, the interfacial adhesion was evaluated under high-temperature conditions, and also after the specimen had been exposed in a furnace for several hours. Finally, the interfacial adhesion energy was quantitatively determined and compared for each condition.

Keywords: solid oxide fuel cell (SOFC), metal adhesive, adhesion, high temperature

Procedia PDF Downloads 520
4430 Amblyopia and Eccentric Fixation

Authors: Kristine Kalnica-Dorosenko, Aiga Svede

Abstract:

Amblyopia or 'lazy eye' is impaired or dim vision without an obvious defect or change in the eye. It is often associated with abnormal visual experience, most commonly strabismus, anisometropia or both, and form deprivation. The main task of amblyopia treatment is to ameliorate the etiological factors, to create a clear retinal image and to ensure the participation of the amblyopic eye in the visual process. The treatment of amblyopia with eccentric fixation is usually associated with problems in therapy. Eccentric fixation is present in around 44% of all patients with amblyopia and in 30% of patients with strabismic amblyopia. In Latvia, amblyopia is carefully treated in various clinics, but eccentric fixation is diagnosed relatively rarely. The conflict that has developed concerning the relationship between the visual disorder and the degree of eccentric fixation in amblyopia should be rethought, because it has an important bearing on the cause and treatment of amblyopia, and on the role of eccentric fixation in this case. Visuoscopy is the most frequently used method for determining eccentric fixation. In traditional visuoscopy, a fixation target is projected onto the patient's retina, and the examiner asks the patient to look straight at the center of the target. The optometrist then observes the point on the macula used for fixation. This objective test provides clinicians with direct observation of the fixation point of the eye; it requires patients to voluntarily fixate the target and assumes that the foveal reflex accurately demarcates the center of the foveal pit. In the end, with such a simple method of evaluating fixation, it is possible to evaluate treatment progress indirectly, as eccentric fixation is always associated with reduced visual acuity. One may therefore expect that if eccentric fixation is found in an amblyopic eye with visuoscopy, then visual acuity should be less than 1.0 (in decimal units). With occlusion or another amblyopia therapy, one would expect both visual acuity and fixation to improve simultaneously, that is, fixation would become more central. Consequently, improvement in the fixation pattern with treatment is an indirect measure of improvement in visual acuity. Evaluation of eccentric fixation may be helpful in identifying amblyopia in children prior to the measurement of visual acuity. This is very important because the earlier amblyopia is diagnosed, the better the chance of improving visual acuity.

Keywords: amblyopia, eccentric fixation, visual acuity, visuoscopy

Procedia PDF Downloads 157
4429 Sensitivity Assessment of Spectral Salinity Indices over Desert Sabkha of Western UAE

Authors: Rubab Ammad, Abdelgadir Abuelgasim

Abstract:

The UAE lies in one of the most arid regions of the world and is thus home to geologic features common to such climatic conditions, including vast open deserts, sand dunes, saline soils, inland Sabkha and coastal Sabkha. Sabkha are characteristic salt flats formed in arid environments by the deposition and precipitation of salt and silt over the sand surface, owing to a shallow water table and rates of evaporation exceeding rates of precipitation. The study area, which comprises western UAE, is heavily concentrated with inland Sabkha. Remote sensing is conventionally used to study the soil salinity of agriculturally degraded lands, but not so often for Sabkha. The focus of this study was to identify these highly saline Sabkha areas in remotely sensed data using salinity indices. The existing salinity indices in the literature were designed for agricultural soils and have not frequently used the spectral response of the short-wave infrared (SWIR1 and SWIR2) parts of the electromagnetic spectrum. Using Landsat 8 OLI data and field ground truthing, this study formulated indices utilizing the NIR-SWIR parts of the spectrum and compared the results with existing salinity indices. Most indices show a reasonably good relationship between salinity and the spectral index up to a certain salinity value, after which the reflectance reaches a saturation point; this saturation point varies with the index. However, the study findings suggest that incorporating the near-infrared and short-wave infrared bands in a salinity index has the potential to maintain a positive relationship between salinity and reflectance up to higher salinity values than the other indices.
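The sketch below illustrates how such a normalised-difference style index can be evaluated from Landsat 8 OLI band arrays and compared against field electrical conductivity; the band combination, reflectance values and EC readings are all assumptions for illustration, not the index formulated in the study.

import numpy as np

# Placeholder reflectance arrays for one scene subset (rows x cols)
nir   = np.random.default_rng(2).uniform(0.1, 0.5, size=(100, 100))   # Landsat 8 band 5
swir1 = np.random.default_rng(3).uniform(0.1, 0.6, size=(100, 100))   # Landsat 8 band 6
swir2 = np.random.default_rng(4).uniform(0.1, 0.6, size=(100, 100))   # Landsat 8 band 7

def normalised_difference(a, b):
    return (a - b) / (a + b + 1e-9)      # small epsilon avoids division by zero

index_swir = normalised_difference(swir1, swir2)    # SWIR-only salinity proxy (illustrative)
index_nir  = normalised_difference(nir, swir1)      # NIR-SWIR combination (illustrative)

# Compare index values against field-measured EC (dS/m) at hypothetical sample points
ec = np.random.default_rng(5).uniform(2, 200, size=30)
rows, cols = np.random.default_rng(6).integers(0, 100, (2, 30))
r = np.corrcoef(ec, index_nir[rows, cols])[0, 1]
print("correlation of NIR-SWIR index with EC:", round(r, 3))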

Keywords: Sabkha, salinity index, saline soils, Landsat 8, SWIR1, SWIR2, UAE desert

Procedia PDF Downloads 209
4428 The Psychology of Cross-Cultural Communication: A Socio-Linguistics Perspective

Authors: Tangyie Evani, Edmond Biloa, Emmanuel Nforbi, Lem Lilian Atanga, Kom Beatrice

Abstract:

The dynamics of languages in contact necessitates a close study of how their users negotiate meanings from shared values in the process of cross-cultural communication. A transverse analysis of the situation demonstrates the existence of complex efforts to connect cultural knowledge to cross-linguistic competencies within a widening range of communicative exchanges. This paper sets out to examine the psychology of cross-cultural communication in a multilingual setting like Cameroon, where many local and international languages are in close contact. The paper equally analyses the pertinence of existing macro-sociological concepts as fundamental knowledge traits in literal and idiomatic cross-semantic mapping. From this point, the article presents a path model connecting sociolinguistics to the increasing adoption of a widening range of communicative genres piloted by the ongoing globalisation trends with their high-speed information technology machinery. By applying a cross-cultural analysis frame, the paper contributes to a better understanding of the fundamental changes in the nature and goals of cross-cultural knowledge in the pragmatics of communication and cultural acceptability. It emphasises that, in an era of increasing global interchange, a comprehensive and inclusive global culture built by bridging gaps in cross-cultural communication would have significant potential to contribute to achieving global social development goals, if inadequacies in language constructs are adjusted to create avenues that intertwine with sociocultural beliefs, ensuring that meaningful and context-bound sociolinguistic values are observed within the global arena of communication.

Keywords: cross-cultural communication, customary language, literalisms, primary meaning, subclasses, transubstantiation

Procedia PDF Downloads 283
4427 Dynamic and Thermal Characteristics of Three-Dimensional Turbulent Offset Jet

Authors: Ali Assoudi, Sabra Habli, Nejla Mahjoub Saïd, Philippe Bournot, Georges Le Palec

Abstract:

Studying the flow characteristics of a turbulent offset jet is an important topic among researchers across the world because of its various engineering applications. Common examples include injection and carburetor systems, entrainment and mixing processes in gas turbine and boiler combustion chambers, thrust-augmenting ejectors for V/STOL aircraft and HVAC systems, environmental discharges, film cooling and many others. An offset jet is formed when a jet discharges into a medium above a horizontal solid wall that is parallel to the axis of the jet exit but offset from it by a certain distance. The structure of a turbulent offset jet can be described by three main regions. Close to the nozzle exit, an offset jet possesses characteristic features similar to those of free jets. Then, the entrainment of fluid between the jet, the offset wall and the bottom wall creates a low-pressure zone, forcing the jet to deflect towards the wall and eventually attach to it at the impingement point; this is referred to as the Coanda effect. Further downstream, after the reattachment point, the offset jet has the characteristics of a wall jet flow. The offset jet therefore combines characteristics of free, impinging and wall jets, and it is relatively more complex than these types of flows. The present study examines the dynamic and thermal evolution of a 3D turbulent offset jet for different offset height ratios (the ratio of the distance from the jet exit to the impingement bottom wall to the jet nozzle diameter). To achieve this purpose, a numerical study was conducted to investigate a three-dimensional offset jet flow by solving the governing Navier-Stokes equations by means of the finite volume method and the RSM second-order turbulence closure model. A detailed discussion is provided of the flow and thermal characteristics in the form of streamlines, mean velocity vectors, pressure fields and Reynolds stresses.

Keywords: offset jet, offset ratio, numerical simulation, RSM

Procedia PDF Downloads 303
4426 Comparison of Cervical Length Using Transvaginal Ultrasonography and Bishop Score to Predict Succesful Induction

Authors: Lubena Achmad, Herman Kristanto, Julian Dewantiningrum

Abstract:

Background: The Bishop score is a standard method used to predict the success of induction. The examination tends to be subjective, with high inter- and intra-observer variability, so it is presumed to have a low predictive value for the outcome of labor induction. Cervical length measurement using transvaginal ultrasound is considered to be a more objective way to assess the cervix; moreover, it is not a complicated procedure and is less invasive than digital vaginal examination. Objective: To compare transvaginal ultrasound and the Bishop score in predicting successful induction. Methods: This was a prospective cohort study. One hundred and twenty women with singleton pregnancies undergoing induction of labor at 37-42 weeks who met the inclusion and exclusion criteria were enrolled. Cervical assessment by both transvaginal ultrasound and the Bishop score was conducted prior to induction. Successful induction was defined as reaching the active phase within 12 hours of induction. To determine the best cut-off points for cervical length and the Bishop score, receiver operating characteristic (ROC) curves were plotted, and logistic regression analysis was used to determine which factors best predicted induction success. Results: There were significant differences in age, premature rupture of the membranes, the Bishop score, cervical length and funneling as predictors of successful induction. The ROC curves showed that the best cut-off points for predicting successful induction were 25.45 mm for cervical length and 3 for the Bishop score. Logistic regression showed that only premature rupture of the membranes and cervical length ≤ 25.45 mm significantly predicted the success of labor induction. After excluding premature rupture of the membranes as the indication for induction, a cervical length of less than 25.3 mm was a better predictor of successful induction. Conclusion: Compared with the Bishop score, cervical length measured by transvaginal ultrasound was a better predictor of successful induction.
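A hedged sketch of how such ROC-based cut-off points can be obtained (using Youden's J statistic with scikit-learn) is given below; the cervical lengths and induction outcomes are simulated placeholders, not the study data.

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(7)
success = rng.integers(0, 2, size=120)                      # 1 = active phase reached within 12 h (simulated)
# Shorter cervix tends to go with successful induction, so score on the negated length
cervical_length = np.where(success == 1,
                           rng.normal(22, 5, 120),
                           rng.normal(30, 5, 120))

fpr, tpr, thresholds = roc_curve(success, -cervical_length)
best = np.argmax(tpr - fpr)                                 # Youden's J = sensitivity + specificity - 1
print("AUC:", round(roc_auc_score(success, -cervical_length), 3))
print("best cervical-length cut-off (mm):", round(-thresholds[best], 1))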

Keywords: Bishop Score, cervical length, induction, successful induction, transvaginal sonography

Procedia PDF Downloads 324
4425 Bioinformatics Approach to Support Genetic Research in Autism in Mali

Authors: M. Kouyate, M. Sangare, S. Samake, S. Keita, H. G. Kim, D. H. Geschwind

Abstract:

Background & Objectives: Human genetic studies can be expensive, even unaffordable, in developing countries, partly due to sequencing costs. Our aim is to pilot the use of bioinformatics tools to guide scientifically valid, locally relevant, and economically sound autism genetic research in Mali. Methods: The NCBI, HGMD, and LSDB databases were used to identify hotspot mutations. Phenotype, transmission pattern, theoretical protein expression in the brain, and the impact of the mutation on the 3D structure of the protein were used to prioritize the selected autism genes. We used the protein database, Modeller, and Clustal W. Results: We found Mef2c (Gly27Ala/Leu38Gln), Pten (Thr131Ile), Prodh (Leu289Met), Nme1 (Ser120Gly), and Dhcr7 (Pro227Thr/Glu224Lys). These mutations were associated with the endonucleases BseRI, NspI, PfrJS2IV, BspGI, BsaBI, and SpoDI, respectively. The Gly27Ala/Leu38Gln mutations impacted the 3D structure of the Mef2c protein, and Mef2c protein sequences across species showed a high percentage of similarity, with a highly conserved MADS domain. Discussion: The frequencies of Mef2c, Pten, Prodh, Nme1, and Dhcr7 gene mutations in the Malian population will be very informative. PCR coupled with restriction enzyme digestion can be used to screen for the targeted gene mutations, with Sanger sequencing used for confirmation only. This will considerably cut down the sequencing cost of gene-by-gene mutation screening. Knowledge of the 3D structure and the potential impact of the mutations on the Mef2c protein informed the protein family and altered function (e.g., Leu38Gln). Conclusion & Future Work: Bioinformatics will positively impact autism research in Mali, and our approach can be applied to other neuropsychiatric disorders.
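As a simple illustration of the PCR-RFLP screening idea mentioned above, the sketch below scans a hypothetical amplicon for a restriction enzyme recognition site to check whether digestion would distinguish the two alleles; the sequences and the recognition site are placeholders, not the real MEF2C amplicon or the exact BseRI/NspI sites.

def find_sites(sequence, recognition_site):
    """Return 0-based positions where the recognition site occurs in the amplicon."""
    seq = sequence.upper()
    site = recognition_site.upper()
    return [i for i in range(len(seq) - len(site) + 1) if seq[i:i + len(site)] == site]

wild_type = "ATGCGGCATGACTTACCGGA"   # hypothetical amplicon
mutant    = "ATGCGGCGTGACTTACCGGA"   # single-base change destroys the site
site      = "CATG"                   # placeholder recognition sequence

for label, seq in [("wild type", wild_type), ("mutant", mutant)]:
    hits = find_sites(seq, site)
    # A cut in one allele but not the other means the digest separates them on a gel
    print(f"{label}: {'cut at ' + str(hits) if hits else 'not cut'}")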

Keywords: bioinformatics, endonucleases, autism, Sanger sequencing, point mutations

Procedia PDF Downloads 81
4424 Under-Reporting and Under-Recording of Hate Crimes against Muslim Women in Italy

Authors: Broccolo Cinzia, Grigaliunaite Ruta, Saint-Nom Cloé, Savasta Guido

Abstract:

This article analyses the root causes of the under-reporting and under-recording of hate crimes against Muslim women in Italy. The main findings emerged from a survey conducted between May and September 2022, within the framework of the TRUST project (co-funded by the CERV programme (CERV-2021-EQUAL) of the European Union), with relevant practitioners and members of the Muslim community, including first-generation and second-generation Muslim women residing in Italy. The findings reveal that multiple factors contribute to the low reporting rate as well as to the flaws in recording episodes of intolerance and hatred against this group. Lack of trust in the judiciary or the police may be one of the main causes of under-reporting; however, the phenomenon is not limited to such aspects, and additional factors and sources of discrimination paving the way to under-recording were identified during the survey. The significant tendency not to report a case of intolerance and the difficulty of identifying the discriminatory nature of the crime are two faces of the same coin and are closely intertwined; nevertheless, both issues first need to be assessed and analysed separately in order to take their specificities duly into consideration. By contrast, the potential solution to the low recording and reporting trends should be found collectively, namely by involving all the relevant parties and bodies facing the above-mentioned issues. In this regard, a participatory and multi-agency approach may curb the root causes leading Muslim women not to report and, in addition, support law enforcement officials as well as public authorities in providing a more effective service to the victims of hatred, whether offline or online.

Keywords: hate crime, under-reporting, under-recording, Islamophobia, Muslim women

Procedia PDF Downloads 105
4423 Study on Co-Relation of Prostate Specific Antigen with Metastatic Bone Disease in Prostate Cancer on Skeletal Scintigraphy

Authors: Muhammad Waleed Asfandyar, Akhtar Ahmed, Syed Adib-ul-Hasan Rizvi

Abstract:

Objective: To evaluate the ability of the serum concentration of prostate specific antigen (PSA), at two cut-off points, to predict skeletal metastasis on bone scintigraphy in men with prostate cancer. Settings: This study was carried out in the Department of Nuclear Medicine at the Sindh Institute of Urology and Transplantation (SIUT), Karachi, Pakistan. Materials and Method: From August 2013 to November 2013, forty-two (42) consecutive patients with prostate cancer who underwent technetium-99m methylene diphosphonate (Tc-99m MDP) whole-body bone scintigraphy were prospectively analyzed. The information was collected from the scintigraphic database of the nuclear medicine department. Patients who did not have a serum PSA concentration available within 1 month before or after the Tc-99m MDP whole-body bone scintigraphy were excluded from this study. Whole-body bone scintigraphy (from the toes to the top of the head) was performed with a moving gamma camera technique (anterior and posterior views) 2-4 hours after intravenous injection of 20 mCi of Tc-99m MDP. In addition, all patients had a pathological report available. Bony metastases were determined from the bone scan studies, and no further correlation with histopathology or other imaging modalities was performed. To preserve patient confidentiality, direct patient identifiers were not collected. In all patients, PSA values and skeletal scintigraphy were evaluated. Results: The mean age, mean PSA, and incidence of bone metastasis on bone scintigraphy were 68.35 years, 370.51 ng/mL and 19/42 (45.23%), respectively. According to PSA level, the patients were divided into groups: < 10 ng/mL (10/42), 10-20 ng/mL (5/42), 20-50 ng/mL (2/42), 50-100 ng/mL (3/42), 100-500 ng/mL (3/42) and more than 500 ng/mL (0/42) presented a negative bone scan. The incidence of a positive bone scan for bone metastasis in each group was 1 patient (5.26%), 0 (0%), 3 patients (15.78%), 1 patient (5.26%), 4 patients (21.05%), and 10 patients (52.63%), respectively. Of the 42 patients, 19 (45.23%) presented a scintigraphic examination positive for bone metastasis. One patient with bone metastasis on bone scintigraphy had a PSA level of less than 10 ng/mL, and in only 1 patient (5.26%) with bone metastasis was the PSA concentration less than 20 ng/mL. Therefore, when the cut-off point adopted for the serum PSA concentration was 10 ng/mL, the negative predictive value for bone metastasis was 95% with a sensitivity of 94.74%, and the positive predictive value and specificity of the method were 56.53% and 43.48%, respectively. When the cut-off point for the serum PSA concentration was 20 ng/mL, the positive predictive value and specificity were 78.27% and 65.22%, respectively, whereas the negative predictive value and sensitivity stood at 100% and 95%, respectively. Conclusion: The results of our study allow us to conclude that a serum PSA concentration higher than 20 ng/mL was a more accurate cut-off point than a serum PSA concentration higher than 10 ng/mL for predicting metastasis on radionuclide bone scintigraphy. In this way, unnecessary costs can be avoided, since a considerable proportion of prostate adenocarcinomas present with low serum PSA levels of less than 20 ng/mL, and for these cases radionuclide bone scintigraphy could be unnecessary.
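A minimal sketch of how the diagnostic-accuracy figures above (sensitivity, specificity, PPV, NPV) follow from a 2x2 table is given below; the counts are only an approximate reconstruction from the reported group sizes for the 20 ng/mL cut-off and should be treated as illustrative.

def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic-accuracy measures from a 2x2 contingency table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }

# Illustrative 2x2 table for the 20 ng/mL cut-off, reconstructed approximately
# from the group counts in the abstract (42 patients, 19 with metastasis)
tp, fp = 18, 8    # PSA above cut-off: with / without metastasis
fn, tn = 1, 15    # PSA below cut-off: with / without metastasis

for name, value in diagnostic_metrics(tp, fp, fn, tn).items():
    print(f"{name}: {value:.2%}")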

Keywords: bone scan, cut off value, prostate specific antigen value, scintigraphy

Procedia PDF Downloads 318
4422 An Exponential Field Path Planning Method for Mobile Robots Integrated with Visual Perception

Authors: Magdy Roman, Mostafa Shoeib, Mostafa Rostom

Abstract:

Global vision, whether provided by overhead fixed cameras, on-board aerial-vehicle cameras, or satellite images, can always provide detailed information on the environment around mobile robots. In this paper, an intelligent vision-based method of path planning and obstacle avoidance for mobile robots is presented. The method integrates visual perception with a newly proposed field-based path-planning method to overcome common path-planning problems such as local minima, unreachable destinations and unnecessarily long paths around obstacles. The method proposes an exponential angle-deviation field around each obstacle that affects the orientation of a nearby robot. As the robot heads toward the goal point, obstacles are classified into right and left groups, and a deviation angle is exponentially added to or subtracted from the orientation of the robot. The exponential field parameters are chosen based on the Lyapunov stability criterion to guarantee convergence of the robot to the destination. The proposed method uses the obstacles' shape and location, extracted from the global vision system, through a collision-prediction mechanism to decide whether to activate or deactivate an obstacle's field. In addition, a search mechanism is developed for the case in which the robot or the goal point is trapped among obstacles, to find a suitable exit or entrance. The proposed algorithm is validated both in simulation and through experiments. The algorithm is effective in obstacle avoidance and destination convergence, overcoming common path-planning problems found in classical methods.
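The sketch below gives a rough, hedged illustration of the exponential deviation idea: obstacles to the left or right of the line to the goal subtract or add a heading correction that decays exponentially with distance. The exact field shape and the gain and decay parameters are assumptions, not the authors' formulation.

import numpy as np

def heading_with_deviation(robot, goal, obstacles, gain=1.0, decay=0.8):
    robot, goal = np.asarray(robot, float), np.asarray(goal, float)
    d = goal - robot
    theta = np.arctan2(d[1], d[0])                            # nominal heading straight to the goal
    for obs in np.asarray(obstacles, float):
        to_obs = obs - robot
        dist = np.linalg.norm(to_obs)
        side = np.sign(d[0] * to_obs[1] - d[1] * to_obs[0])   # +1: obstacle on the left, -1: on the right
        theta -= side * gain * np.exp(-decay * dist)          # deviation decays exponentially with distance
    return theta

robot, goal = (0.0, 0.0), (10.0, 0.0)
obstacles = [(4.0, 0.5), (6.0, -1.0)]
print("commanded heading (rad):", round(heading_with_deviation(robot, goal, obstacles), 3))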

Keywords: path planning, collision avoidance, convergence, computer vision, mobile robots

Procedia PDF Downloads 192
4421 Learning from Dendrites: Improving the Point Neuron Model

Authors: Alexander Vandesompele, Joni Dambre

Abstract:

The diversity in dendritic arborization, as first illustrated by Santiago Ramon y Cajal, has always suggested a role for dendrites in the functionality of neurons. In the past decades, thanks to new recording techniques and optical stimulation methods, it has become clear that dendrites are not merely passive electrical components: they are observed to integrate inputs in a non-linear fashion and actively participate in computations. Regardless, dendritic structure and functionality are often overlooked in simulations of neural networks. Especially in a machine-learning context, when designing artificial neural networks, point neuron models such as the leaky integrate-and-fire (LIF) model are dominant. These models mimic the integration of inputs at the neuron soma and ignore the existence of dendrites. In this work, the LIF point neuron model is extended with a simple form of dendritic computation. This gives the LIF neuron an increased capacity to discriminate spatiotemporal input sequences, a dendritic functionality observed in another study. Simulations of the spiking neurons are performed using the Bindsnet framework. In the common LIF model, incoming synapses are independent. Here, we introduce a dependency between incoming synapses such that the post-synaptic impact of a spike is not only determined by the weight of the synapse, but also by the activity of other synapses. This is a form of short-term plasticity in which synapses are potentiated or depressed by the preceding activity of neighbouring synapses, and it is a straightforward way to prevent inputs from simply summing linearly at the soma. To implement this, each pair of synapses on a neuron is assigned a variable representing the synaptic relation; this variable determines the magnitude of the short-term plasticity. These variables can be chosen randomly or, more interestingly, can be learned using a form of Hebbian learning. We use Spike-Timing-Dependent Plasticity (STDP), commonly used to learn synaptic strength magnitudes. If all neurons in a layer receive the same input, they tend to learn the same thing through STDP. Adding inhibitory connections between the neurons creates a winner-take-all (WTA) network, which causes the different neurons to learn different input sequences. To illustrate the impact of the proposed dendritic mechanism, even without learning, we attach five input neurons to two output neurons. One output neuron is a regular LIF neuron; the other is a LIF neuron with dendritic relationships. The five input neurons are allowed to fire in a particular order, the membrane potentials are reset, and subsequently the five input neurons are fired in the reversed order. As the regular LIF neuron linearly integrates its inputs at the soma, the membrane potential response to both sequences is similar in magnitude. In the other output neuron, due to the dendritic mechanism, the membrane potential response differs between the two sequences. Hence, the dendritic mechanism improves the neuron's capacity for discriminating spatiotemporal sequences. Dendritic computations improve LIF neurons even if the relationships between synapses are established randomly. Ideally, however, a learning rule is used to improve the dendritic relationships based on input data. Just as it is possible to learn synaptic strength with STDP to make a neuron more sensitive to its input, it is possible to learn dendritic relationships with STDP to make the neuron more sensitive to spatiotemporal input sequences. Feeding structured data to a WTA network with dendritic computation leads to a significantly higher number of discriminated input patterns. Without the dendritic computation, output neurons are less specific and may, for instance, be activated by a sequence in reverse order.
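A plain-NumPy sketch of this mechanism is given below (the study itself used the BindsNET framework): a LIF neuron whose effective synaptic weights are transiently scaled by the recent activity of the other synapses through a pairwise relation matrix, so that a forward and a reversed input sequence produce different membrane responses. All parameter values are illustrative.

import numpy as np

n_in, steps = 5, 40
rng = np.random.default_rng(0)
w = np.full(n_in, 1.0)                     # baseline synaptic weights
R = rng.uniform(-0.5, 0.5, (n_in, n_in))   # pairwise synaptic relation variables
np.fill_diagonal(R, 0.0)

def run(spike_train, tau_m=10.0, tau_trace=5.0, v_thresh=5.0):
    """Return output spike times and the peak membrane potential for one input sequence."""
    v, peak, trace, out = 0.0, 0.0, np.zeros(n_in), []
    for t in range(steps):
        s = spike_train[t]                                      # incoming spikes (0/1 per synapse)
        modulation = 1.0 + R @ trace                            # scale each synapse by its neighbours' recent activity
        v = v * (1 - 1 / tau_m) + np.sum(w * modulation * s)    # leaky integration at the soma
        trace = trace * (1 - 1 / tau_trace) + s                 # decaying presynaptic activity traces
        peak = max(peak, v)
        if v >= v_thresh:
            out.append(t)
            v = 0.0
    return out, round(peak, 3)

# Fire the five inputs in order, then in reverse order (3 time steps apart)
forward = np.zeros((steps, n_in), int)
forward[np.arange(5) * 3, np.arange(5)] = 1
backward = np.zeros((steps, n_in), int)
backward[np.arange(5) * 3, np.arange(5)[::-1]] = 1

print("forward sequence :", run(forward))    # the two sequences yield different peak responses
print("reverse sequence :", run(backward))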

Keywords: dendritic computation, spiking neural networks, point neuron model

Procedia PDF Downloads 132
4420 Predicting Match Outcomes in Team Sport via Machine Learning: Evidence from National Basketball Association

Authors: Jacky Liu

Abstract:

This paper develops a team sports outcome prediction system with potential for wide-ranging applications across various disciplines. Despite significant advancements in predictive analytics, existing studies of sports outcome prediction have considerable limitations, including insufficient feature engineering and underutilization of advanced machine learning techniques, among others. To address these issues, we extend the Sports Cross Industry Standard Process for Data Mining (SRP-CRISP-DM) framework and propose a unique, comprehensive predictive system, using National Basketball Association (NBA) data as an example to test this extended framework. Our approach follows a holistic methodology in feature engineering, employing both time series and non-time-series data, as well as conducting exploratory data analysis and feature selection. Furthermore, we contribute to the discourse on the choice of target variable in team sports outcome prediction, asserting that point spread prediction yields higher profits than game-winner prediction. Using machine learning algorithms, particularly XGBoost, results in a significant improvement in the predictive accuracy of team sports outcomes; applied to point spread betting strategies, it offers an annual return of approximately 900% on an initial investment of $100. Our findings not only contribute to the academic literature but also have practical implications for sports betting. Our study advances the understanding of team sports outcome prediction, a burgeoning area in complex system prediction, and paves the way for potential profitability and more informed decision making in sports betting markets.
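The sketch below gives a hedged illustration of the point-spread framing with XGBoost: a classifier predicts whether the home team covers the bookmaker's spread and a naive flat-stake simulation estimates profit. All features, labels and odds are random placeholders rather than the NBA dataset or the betting strategy used in the study.

import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_games = 2000
X = rng.normal(size=(n_games, 20))            # rolling team stats, rest days, etc. (placeholder)
y = rng.integers(0, 2, size=n_games)          # 1 = home team covered the spread (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = XGBClassifier(n_estimators=400, max_depth=4, learning_rate=0.05,
                      eval_metric="logloss").fit(X_tr, y_tr)

proba = model.predict_proba(X_te)[:, 1]
bets = np.abs(proba - 0.5) > 0.1              # only bet when the model is reasonably confident
picks = (proba > 0.5).astype(int)
wins = picks[bets] == y_te[bets]
profit = np.sum(np.where(wins, 0.91, -1.0))   # flat $1 stakes at -110 style odds
print(f"bets placed: {bets.sum()}, accuracy: {wins.mean():.2%}, profit: ${profit:.2f}")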

Keywords: machine learning, team sports, game outcome prediction, sports betting, profits simulation

Procedia PDF Downloads 101
4419 Implementation of Fuzzy Version of Block Backward Differentiation Formulas for Solving Fuzzy Differential Equations

Authors: Z. B. Ibrahim, N. Ismail, K. I. Othman

Abstract:

Fuzzy Differential Equations (FDEs) play an important role in modelling many real-life phenomena. FDEs are used to model the behaviour of problems subject to uncertainty and vague or imprecise information, which constantly arise in mathematical models in various branches of science and engineering. These uncertainties have to be taken into account in order to obtain a more realistic model, and for many such models it is difficult, and sometimes impossible, to obtain analytic solutions. Thus, many authors have attempted to extend or modify existing numerical methods developed for solving Ordinary Differential Equations (ODEs) into fuzzy versions suitable for solving FDEs. In this paper, we propose a fuzzy version of a three-point block method based on Block Backward Differentiation Formulas (FBBDF) for the numerical solution of first-order FDEs. The three-point block FBBDF method, implemented with a uniform step size, produces three new approximations simultaneously at each integration step using the same back values. The Newton iteration of the FBBDF is formulated, and the implementation is based on the predictor and corrector formulas in PECE mode. For greater efficiency of the block method, the coefficients of the FBBDF are stored at the start of the program. The proposed FBBDF is validated through numerical results on some standard problems found in the literature, and comparisons are made with the existing fuzzy versions of the modified Simpson and Euler methods in terms of the accuracy of the approximated solutions. The numerical results show that the FBBDF method performs better in terms of accuracy than the Euler method when solving FDEs.

Keywords: block, backward differentiation formulas, first order, fuzzy differential equations

Procedia PDF Downloads 318
4418 A Novel Method for Face Detection

Authors: H. Abas Nejad, A. R. Teymoori

Abstract:

Facial expression recognition is one of the open problems in computer vision. Robust neutral face recognition in real time is a major challenge for various supervised-learning-based facial expression recognition methods. This is because supervised methods cannot accommodate all the appearance variability across faces with respect to race, pose, lighting, facial biases, etc. in a limited amount of training data. Moreover, processing each and every frame to classify emotions is not required, as the user stays neutral for the majority of the time in usual applications like video chat or photo album/web browsing. Detecting the neutral state at an early stage, and thereby excluding those frames from emotion classification, would save computational power. In this work, we propose a light-weight neutral vs. emotion classification engine, which acts as a preprocessor to traditional supervised emotion classification approaches. It dynamically learns the neutral appearance at Key Emotion (KE) points using a textural statistical model constructed from a set of reference neutral frames for each user. The proposed method is made robust to various types of user head motion by accounting for affine distortions based on the textural statistical model. Robustness to dynamic shifts of the KE points is achieved by evaluating the similarities on a subset of neighborhood patches around each KE point, using prior information regarding the directionality of the specific facial action units acting on the respective KE point. As a result, the proposed method improves emotion recognition (ER) accuracy and simultaneously reduces the computational complexity of the ER system, as validated on multiple databases.
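A minimal sketch of the Local Binary Pattern histogram descriptor named in the keywords, computed per patch around hypothetical KE points, is given below; the frame, the KE coordinates and the simple L1-based similarity are placeholders, not the authors' pipeline.

import numpy as np
from skimage.feature import local_binary_pattern

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(240, 320)).astype(np.uint8)   # grayscale frame (placeholder)
ke_points = [(80, 120), (80, 200), (150, 160)]                   # hypothetical KE points (eyes, mouth)

def lbp_histogram(patch, P=8, R=1):
    """Uniform LBP histogram of one patch (P + 2 bins)."""
    lbp = local_binary_pattern(patch, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def ke_descriptor(image, points, half=16):
    feats = []
    for (r, c) in points:
        patch = image[r - half:r + half, c - half:c + half]
        feats.append(lbp_histogram(patch))
    return np.concatenate(feats)

neutral_model = ke_descriptor(frame, ke_points)      # reference neutral statistics for this user
incoming = ke_descriptor(frame, ke_points)           # descriptor of a new frame
similarity = 1 - 0.5 * np.sum(np.abs(neutral_model - incoming))  # crude L1 similarity; 1.0 = identical
print("similarity to neutral model:", round(similarity, 3))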

Keywords: neutral vs. emotion classification, Constrained Local Model, procrustes analysis, Local Binary Pattern Histogram, statistical model

Procedia PDF Downloads 336
4417 A Framework for Teaching the Intracranial Pressure Measurement through an Experimental Model

Authors: Christina Klippel, Lucia Pezzi, Silvio Neto, Rafael Bertani, Priscila Mendes, Flavio Machado, Aline Szeliga, Maria Cosendey, Adilson Mariz, Raquel Santos, Lys Bendett, Pedro Velasco, Thalita Rolleigh, Bruna Bellote, Daria Coelho, Bruna Martins, Julia Almeida, Juliana Cerqueira

Abstract:

This project presents a framework for teaching intracranial pressure (ICP) monitoring concepts using a low-cost experimental model in a neurointensive care education program. Data from ICP monitoring contribute to the patient's clinical assessment, may dictate the course of action of the health team (nursing and medical staff), and influence decisions about the appropriate intervention. This study aims to present a safe method for teaching ICP monitoring to medical students in a simulation center. Methodology: Medical school teachers, along with students from the 4th year, built an experimental model for teaching ICP measurement. The model consists of a mannequin's head with a plastic bag inside simulating the cerebral ventricle and an inserted ventricular catheter connected to the ICP monitoring system. The bag simulating the ventricle can also be exchanged for others containing simulated bloody or infected cerebrospinal fluid. On the mannequin's ear, there is a blue point indicating the right place to set the "zero point" for an accurate pressure reading. The educational program includes four steps: 1st, students receive a script on ICP measurement to read before training; 2nd, students watch a video about the subject, created in the simulation center, demonstrating each step of ICP monitoring and the proper care, such as correct positioning of the patient, the anatomical structures used to establish the zero point for ICP measurement, and the safe range of ICP; 3rd, students practice the procedure on the model, with teachers helping them during training; 4th, students are assessed with a checklist form, with feedback and correction of wrong actions. Results: Students expressed interest in learning ICP monitoring. Tests concerning the hit rate are still being performed; the final ICP results and the video will be shown at the event. Conclusion: The study of intracranial pressure measurement based on an experimental model is an effective and controlled method of learning and research, well suited to teaching neurointensive care practices. Assessment based on a checklist form helps teachers keep track of students' learning progress. This project offers medical students a safe method to develop intensive neurological monitoring skills for the clinical assessment of patients with neurological disorders.

Keywords: neurology, intracranial pressure, medical education, simulation

Procedia PDF Downloads 170
4416 Towards a Robust Patch Based Multi-View Stereo Technique for Textureless and Occluded 3D Reconstruction

Authors: Ben Haines, Li Bai

Abstract:

Patch-based reconstruction methods remain among the top-performing approaches to 3D reconstruction. Their local approach to refining the position and orientation of a patch, free of global minimisation and independent of surface smoothness, makes patch-based methods extremely powerful in recovering fine-grained detail of an object's surface. However, patch-based approaches still fail to faithfully reconstruct textureless or highly occluded surface regions; thus, although they perform well under laboratory conditions, they deteriorate in industrial or real-world situations. They are also computationally expensive. Current patch-based methods generate point clouds with holes in textureless or occluded regions, which require expensive energy-minimisation techniques to fill and interpolate into a high-fidelity reconstruction. Such shortcomings hinder the adoption of these methods for industrial applications, where object surfaces are often highly textureless and the speed of reconstruction is an important factor. This paper presents ongoing work towards a multi-resolution approach to address these problems, utilising particle swarm optimisation to reconstruct high-fidelity geometry and increasing robustness to textureless features through an adapted approach to normalised cross-correlation. The work also aims to speed up the reconstruction using advances in GPU technology and to remove the need for costly initialisation and expansion. Through the combination of these enhancements, the intention of this work is to create denser patch clouds, even in textureless regions, within a reasonable time. Initial results show the potential of such an approach to construct denser point clouds with accuracy comparable to that of the current top-performing algorithms.
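
For orientation, the following sketch shows a plain normalised cross-correlation (NCC) photo-consistency score of the kind a particle-swarm refinement could maximise when adjusting a patch's position and normal. The paper's adapted NCC variant and its PSO machinery are not reproduced here; the function names and the assumption that patches are already sampled as equally sized intensity arrays are illustrative only.

```python
# Hedged sketch, not the paper's method: a plain NCC photo-consistency score
# between a reference patch and its projections into other visible views.
import numpy as np

def ncc(patch_a, patch_b, eps=1e-8):
    """Normalised cross-correlation of two equally sized intensity patches.
    Returns a value in [-1, 1]; higher means more photo-consistent."""
    a = patch_a.astype(np.float64).ravel()
    b = patch_b.astype(np.float64).ravel()
    a = (a - a.mean()) / (a.std() + eps)
    b = (b - b.mean()) / (b.std() + eps)
    return float(np.dot(a, b) / a.size)

def photo_consistency(ref_patch, projected_patches):
    """Average NCC of a reference patch against its projections; this is the
    fitness a particle swarm could use to score candidate patch parameters."""
    scores = [ncc(ref_patch, p) for p in projected_patches]
    return float(np.mean(scores)) if scores else -1.0
```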

Keywords: 3D reconstruction, multiview stereo, particle swarm optimisation, photo consistency

Procedia PDF Downloads 202
4415 Virtual Reality as a Method in Transformative Learning: A Strategy to Reduce Implicit Bias

Authors: Cory A. Logston

Abstract:

It is imperative that researchers continue to explore every transformative strategy to increase empathy and awareness of racial bias. Racism is a social and political concept that uses stereotypical ideology to highlight racial inequities. Everyone has biases toward disparate out-groups that they may not be aware of. There is some form of racism in every profession; doctors, lawyers, and teachers are not immune. There have been numerous successful and unsuccessful strategies to motivate and transform an individual's unconscious bias. One method designed to induce a transformative experience and identify implicit bias is virtual reality (VR). VR is a technology designed to transport the user to a three-dimensional environment. In a virtual reality simulation, the viewer is immersed in a realistic interactive video taking on the perspective of a Black man. As the character, the viewer experiences discrimination in various life circumstances from childhood into adulthood: for instance, prejudice felt in school, encounters with the police as an adolescent, and false accusations in the workplace. Current research suggests that an immersive VR simulation can enhance self-awareness and become a transformative learning experience. This study uses virtual reality immersion and transformative learning theory to create empathy and identify any unintentional racial bias. Participants, White teachers, will experience a VR immersion to create awareness and identify implicit biases regarding Black students. The desired outcome provides a springboard for them to reconceptualize their own implicit bias. Virtual reality is gaining traction in the research world and promises to be an effective tool in the transformative learning process.

Keywords: empathy, implicit bias, transformative learning, virtual reality

Procedia PDF Downloads 193
4414 Lie Symmetry of a Nonlinear System Characterizing Endemic Malaria

Authors: Maba Boniface Matadi

Abstract:

This paper analyses a model of endemic malaria from the point of view of the group-theoretic approach. The study identified new independent variables that lead to the transformation of the nonlinear model. Furthermore, the corresponding determining equations were constructed, and new symmetries were found. As a result, the findings of the study demonstrate the integrability of the model and present an invariant solution for the malaria model.
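
As a toy illustration of how such symmetry conditions are checked (not the malaria model of the paper), the SymPy sketch below verifies the linearised symmetry condition for a time-translation symmetry of a simple autonomous infection equation; the stand-in equation and the candidate infinitesimals are assumptions chosen for brevity.

```python
# Hedged illustration: check the linearised symmetry condition for a Lie point
# symmetry of the stand-in equation dI/dt = beta*I*(1 - I).
import sympy as sp

t, I, beta = sp.symbols('t I beta', positive=True)
omega = beta * I * (1 - I)          # right-hand side of dI/dt = omega(t, I)

# Candidate infinitesimals: time translation (xi, eta) = (1, 0),
# which any autonomous first-order ODE admits.
xi = sp.Integer(1)
eta = sp.Integer(0)

# Linearised symmetry condition for first-order ODEs:
# eta_t + (eta_I - xi_t)*omega - xi_I*omega**2 - xi*omega_t - eta*omega_I = 0
condition = (sp.diff(eta, t)
             + (sp.diff(eta, I) - sp.diff(xi, t)) * omega
             - sp.diff(xi, I) * omega**2
             - xi * sp.diff(omega, t)
             - eta * sp.diff(omega, I))

print(sp.simplify(condition))       # 0 -> (xi, eta) generates a symmetry
```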

Keywords: group theory, Lie symmetry, invariant solutions, malaria

Procedia PDF Downloads 108
4413 Markov Random Field-Based Segmentation Algorithm for Detection of Land Cover Changes Using Uninhabited Aerial Vehicle Synthetic Aperture Radar Polarimetric Images

Authors: Mehrnoosh Omati, Mahmod Reza Sahebi

Abstract:

Information on land use/land cover change plays an essential role in environmental assessment, planning, and management for regional development. Remotely sensed imagery is widely used to provide information in many change detection applications. Polarimetric synthetic aperture radar (PolSAR) imagery, with its capability to discriminate between different scattering mechanisms, is a powerful tool for environmental monitoring applications. This paper proposes a new boundary-based segmentation algorithm as a fundamental step for land cover change detection. In this method, two PolSAR images are first segmented by integrating a marker-controlled watershed algorithm with a coupled Markov random field (MRF). Then, object-based classification is performed to label image objects as changed or unchanged. Compared with a pixel-based support vector machine (SVM) classifier, this segmentation algorithm significantly reduces the speckle effect in PolSAR images and improves the accuracy of binary classification at the object-based level. Experimental results on Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) polarimetric images show improvements of 3% in overall accuracy and 6% in the kappa coefficient. The proposed method also correctly distinguishes homogeneous image parcels.
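
As a rough sketch of the watershed stage only (the coupled-MRF refinement and the object-based change labelling are beyond this example), the Python snippet below runs a marker-controlled watershed on a single despeckled intensity band; the quantile-based seeding and the function names are assumptions made for illustration.

```python
# Hedged sketch of a marker-controlled watershed segmentation of one
# single-channel PolSAR intensity image into labelled image objects.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed

def marker_watershed(intensity, low_q=0.2, high_q=0.8):
    """Segment one despeckled intensity image; the resulting image objects
    would then be classified as changed/unchanged at the object level."""
    gradient = sobel(intensity)                 # flood the gradient magnitude

    seeds = np.zeros_like(intensity, dtype=np.int32)
    seeds[intensity < np.quantile(intensity, low_q)] = 1   # dark-area seeds
    seeds[intensity > np.quantile(intensity, high_q)] = 2  # bright-area seeds

    markers, _ = ndi.label(seeds)               # give each seed blob its own id
    return watershed(gradient, markers)
```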

Keywords: coupled Markov random field (MRF), environment, object-based analysis, polarimetric SAR (PolSAR) images

Procedia PDF Downloads 216
4412 Future Prospects of Female Journalists in Mass Media of Bangladesh

Authors: M. Nurus Safa, Jiang Jinzhang, Akter Tahera

Abstract:

This study explores how female journalists in Bangladesh have overcome the odds and performed well over the last decade. Female journalists contribute to society's economic prosperity and are changing attitudes towards the concept and process of development. However, the path into journalism is not smooth for women. The findings show that female journalists face many barriers, such as family pressure, social stigma, pay and allowances, gender discrimination, sexual harassment, and even a lack of suitable workplaces. Despite their skill and merit, they face problems in obtaining maternity leave and assignments. Their role in this sector nevertheless cannot be neglected; it is possible to survive with passion, professionalism, and love for the profession. Female participation in the journalism sector in Bangladesh is increasing day by day, and despite the barriers, female journalists are showing strong interest in journalism as a career. As gender balance in the mass media improves, women's freedom and scope will increase, and their presence in media workplaces will spread. A good number of female journalists are working in policy-making positions in their organizations. In the future, there will be more experienced female journalists, as they are now taking on challenges and working diligently to meet company and public needs. In recent times, Bangladesh has been encouraging women to work outside the home, and a significant change in social attitudes is now reflected in women's advancement in the journalism sector. This study uses a survey and six in-depth interviews. A purposive sampling technique was used to collect data from 120 female respondents working as television, online, and print media journalists.

Keywords: attitude, Bangladesh, challenges, female journalists, prospects

Procedia PDF Downloads 217