Search results for: closed open source
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7918

7078 An Approach for Estimating Open Education Resources Textbook Savings: A Case Study

Authors: Anna Ching-Yu Wong

Abstract:

Introduction: Textbooks account for a sizable portion of the overall cost of higher education for students. It is broadly agreed that open education resources (OER) reduce textbook costs and give students a way to receive high-quality learning materials at little or no cost to them. However, there is less agreement over exactly how much is saved. This study presents an approach for calculating OER savings, using SUNY Canton non-OER courses (N=233) to estimate the potential textbook savings for one semester – Fall 2022. The purpose of collecting these data is to understand how much students could potentially save by using OER materials and to establish a record for future studies. Literature Review: In past years, researchers have shown that the rising cost of textbooks disproportionately harms students in higher education institutions and have estimated the average cost of a textbook. For example, Nyamweya (2018), using a simple formula, found that students save on average $116.94 per course when OER are adopted in place of traditional commercial textbooks. Student PIRGs (2015) used reports of per-course savings when transforming a course from a commercial textbook to OER to reach an estimate of $100 average cost savings per course. Allen and Wiley (2016) presented multiple cost-savings studies at the 2016 Open Education Conference and concluded that $100 was a reasonable per-course savings estimate. Ruth (2018) calculated an average textbook cost of $79.37 per course. Hilton et al. (2014) conducted a study with seven community colleges across the nation and found the average textbook cost to be $90.61. There is thus less agreement over exactly how much would be saved by adopting an OER course. This study used SUNY Canton as a case study to create an approach for estimating OER savings. Methodology: Step one: identify non-OER courses from the UcanWeb class schedule. Step two: view the textbook lists for these classes (campus bookstore prices). Step three: calculate the average textbook price by averaging the new-book and used-book prices. Step four: multiply the average textbook price by the number of students in the course (see the sketch following this abstract). Findings: The result of this calculation was straightforward. The average price of a traditional textbook is $132.45, and students could potentially have saved $1,091,879.94. Conclusion: (1) The result confirms what we already knew: adopting OER in place of traditional textbooks and materials achieves significant savings for students, as well as for the parents and taxpayers who support them through grants and loans. (2) The average textbook savings from adopting an OER course varies depending on the size of the college and the number of enrolled students.
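
The four methodology steps amount to a simple per-course calculation. A minimal sketch in Python, using hypothetical course names, prices, and enrollments rather than the SUNY Canton data:

courses = [
    # (course, new book price $, used book price $, enrolled students) -- hypothetical
    ("BIOL101", 150.00, 90.00, 32),
    ("HIST210", 120.00, 75.00, 25),
    ("MATH115", 180.00, 110.00, 40),
]

total_savings = 0.0
for name, new_price, used_price, enrollment in courses:
    avg_price = (new_price + used_price) / 2       # step three: average of new and used
    course_savings = avg_price * enrollment        # step four: price x enrollment
    total_savings += course_savings
    print(f"{name}: avg ${avg_price:.2f} x {enrollment} students = ${course_savings:,.2f}")

print(f"Potential semester savings: ${total_savings:,.2f}")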

Keywords: textbook savings, open textbooks, textbook costs assessment, open access

Procedia PDF Downloads 74
7077 Priming through Open Book MCQ Test: A Tool for Enhancing Learning in Medical Undergraduates

Authors: Bharti Bhandari, Bharati Mehta, Sabyasachi Sircar

Abstract:

Medical education in India is advancing, and with this advancement newer innovations are being incorporated into teaching and assessment methodology. Our study focuses on a teaching innovation that is more student-centric than teacher-centric, which is the need of the day. The innovation was carried out with 1st-year MBBS students of our institute. Students were assigned to control and test groups. Students in the test group were primed with an open-book MCQ-based test on a particular topic before the formal didactic lecture on that topic; the control group was not assigned any such exercise. This was followed by a formal didactic lecture on the same topic, after which both groups were assessed on it. The marks were compiled and analysed using appropriate statistical tests. Students were also given a questionnaire to elicit their views on the benefits of 'self-priming'. The mean marks scored in the theory assessment by the test group were statistically higher than those scored by the controls. According to the students' feedback, the 'self-priming' process was interesting, helped in better orientation during classroom lectures, and led to better understanding of the topic. They wanted it repeated for other topics of moderate difficulty. The better performance of the students in the primed group validates the combination of a student-centric priming model and didactic lecture as superior to the conventional, teacher-centric methods alone. If this system is successfully followed, the present teacher-centric pedagogy should increasingly give way to student-centric activities in which the teacher is only a facilitator.

Keywords: medical education, open-book test, pedagogy, priming

Procedia PDF Downloads 443
7076 Ontological Modeling Approach for Statistical Databases Publication in Linked Open Data

Authors: Bourama Mane, Ibrahima Fall, Mamadou Samba Camara, Alassane Bah

Abstract:

National Statistical Institutes hold large volumes of data, generally stored in formats that constrain how the information they contain can be published. Each household or business data collection project includes its own dissemination platform. These previously used dissemination methods do not promote rapid access to information and, above all, do not offer the option of linking data for in-depth processing. In this paper, we present an approach to modeling these data in order to publish them in a format intended for the Semantic Web. Our objective is to publish all these data on a single platform and to offer the option of linking them with other external data sources. The approach will be applied to data from major national surveys, such as those on employment, poverty, and child labor, and the general population census of Senegal.
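
As an illustration of the target format, a minimal sketch of how a single statistical observation might be expressed with the W3C RDF Data Cube vocabulary using the rdflib library; the base URI, dimensions, and figures are hypothetical placeholders, not the Senegalese survey data:

from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

QB = Namespace("http://purl.org/linked-data/cube#")
EX = Namespace("http://example.org/stats/")  # hypothetical base URI

g = Graph()
g.bind("qb", QB)

# one observation of a (hypothetical) employment survey dataset
obs = EX["obs/employment-2013-dakar"]
g.add((obs, RDF.type, QB.Observation))
g.add((obs, QB.dataSet, EX["dataset/employment-survey"]))
g.add((obs, EX.region, EX["region/dakar"]))
g.add((obs, EX.year, Literal(2013, datatype=XSD.gYear)))
g.add((obs, EX.employmentRate, Literal(0.42, datatype=XSD.decimal)))  # hypothetical value

print(g.serialize(format="turtle"))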

Keywords: Semantic Web, linked open data, database, statistics

Procedia PDF Downloads 174
7075 An Overview on the Effectiveness of Brand Mascot and Celebrity Endorsement

Authors: Isari Pairoa, Proud Arunrangsiwed

Abstract:

Celebrity and brand mascot endorsement have been explored for more than three decades. Both types of endorser can effectively transfer their reputation to a corporate image and can influence customers to purchase the product. However, little was known about the mediators between the level of endorsement and its effect on buying behavior. The objective of the current study is to identify the gap in previous studies and to seek possible mediators. It was found that consumers' memory and identification are the mediators between source credibility and endorsement effect. A future study should confirm the model of endorsement established in the current study.

Keywords: product endorsement, memory, identification theory, source credibility, unintentional effect

Procedia PDF Downloads 227
7074 Comparison of the Proprioception Sense and Standing Balance in Patients with Osteoarthritis Before and After Total Knee Arthroplasty Surgery

Authors: S. Daneshi, G. Shahcheraghi, F. Ghaffarinejad

Abstract:

Background: Osteoarthritis (OA) is the most common form of arthritis, affecting millions of people around the world during the aging process. Knee joint proprioception declines with OA, and Total Knee Arthroplasty (TKA) surgery may affect it further. We investigated two parameters of the proprioception sense (joint position sense and kinesthesia) and standing balance in the affected limb before and after TKA in patients with knee OA. Methods and Materials: In this analytic study, 10 patients who were candidates for TKA over a two-month period at Dena Hospital in Shiraz were selected. All cases were female, aged 55-70 years. Participants were assessed before and two weeks after TKA using three instruments: an electrogoniometer and a continuous passive motion (CPM) device to assess knee joint position sense and kinesthesia at 20 and 45 degrees, and a chronometer to assess the duration of standing balance on the affected leg with eyes open and closed. Results: Wilcoxon signed-rank and Mann-Whitney tests comparing scores before and after TKA indicated no significant differences in knee joint position sense or kinesthesia at 20 and 45 degrees (P>0.05) and no significant differences in standing balance (P>0.05). Conclusion: The study indicates that OA can affect proprioception and standing balance, but that TKA does not have any further effect on these parameters. Intra-articular structures such as the cruciate ligaments and menisci are responsible for proprioception in the normal knee joint. In severe knee OA, the number of mechanoreceptors in these intra-articular structures decreases and their function is reduced relative to the normal knee, and the anterior cruciate ligament (ACL) is already deficient; thus, after TKA surgery, in which this ligament is removed, no significant change in proprioception was found. Because standing balance involves proprioception, muscle strength, and the vestibular system, it likewise showed no significant difference before and after TKA.

Keywords: knee joint, proprioception sense, standing balance, rehabilitation sciences

Procedia PDF Downloads 378
7073 Numerical Simulations of Acoustic Imaging in Hydrodynamic Tunnel with Model Adaptation and Boundary Layer Noise Reduction

Authors: Sylvain Amailland, Jean-Hugh Thomas, Charles Pézerat, Romuald Boucheron, Jean-Claude Pascal

Abstract:

The noise requirements for naval and research vessels have seen an increasing demand for quieter ships in order to fulfil current regulations and to reduce the effects on marine life. Hence, new methods dedicated to the characterization of propeller noise, which is the main source of noise in the far field, are needed. The study of cavitating propellers in a closed section is interesting for analyzing hydrodynamic performance but can involve significant difficulties for hydroacoustic study, especially due to reverberation and boundary layer noise in the tunnel. The aim of this paper is to present a numerical methodology for the identification of hydroacoustic sources on marine propellers using hydrophone arrays in a large hydrodynamic tunnel. The main difficulties are linked to the reverberation of the tunnel and the boundary layer noise, which strongly reduce the signal-to-noise ratio. It is proposed to estimate the reflection coefficients using an inverse method and reference transfer functions measured in the tunnel. This approach reduces the uncertainties of the propagation model used in the inverse problem. In order to reduce the boundary layer noise, a cleaning algorithm is presented that takes advantage of the low-rank and sparse structure of the cross-spectrum matrices of the acoustic signal and the boundary layer noise. This approach makes it possible to recover the acoustic signal even well below the boundary layer noise. The improvement brought by this method is visible on acoustic maps resulting from beamforming and DAMAS algorithms.
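
The low-rank-plus-sparse cleaning step is in the spirit of robust PCA. A rough sketch of such a decomposition by alternating singular-value and entrywise thresholding, shown on real-valued matrices for simplicity; this is a generic stand-in, not the authors' algorithm:

import numpy as np

def soft_threshold(X, tau):
    # entrywise shrinkage toward zero
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def low_rank_plus_sparse(M, lam=None, mu=None, n_iter=200):
    """Split M into a low-rank part L (coherent acoustic component) and a
    sparse part S (boundary-layer-like noise) by alternating thresholding,
    a rough stand-in for principal component pursuit."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * np.abs(M).mean()
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        # low-rank update: singular value thresholding of M - S
        U, sig, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = U @ np.diag(np.maximum(sig - mu, 0.0)) @ Vt
        # sparse update: entrywise soft threshold of the residual
        S = soft_threshold(M - L, lam * mu)
    return L, S

# toy demo: rank-2 "signal" plus sparse "noise"
rng = np.random.default_rng(0)
signal = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 40))
noise = 5.0 * rng.standard_normal((40, 40)) * (rng.random((40, 40)) > 0.95)
L, S = low_rank_plus_sparse(signal + noise)
print(np.linalg.norm(L - signal) / np.linalg.norm(signal))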

Keywords: acoustic imaging, boundary layer noise denoising, inverse problems, model adaptation

Procedia PDF Downloads 332
7072 Radiosensitization Properties of Gold Nanoparticles in Brachytherapy of Uterus Cancer by High Dose Rate I-125 Seed: A Simulation Study by MCNPX and MCNP6 Codes

Authors: Elham Mansouri, Asghar Mesbahi

Abstract:

Purpose: In the current study, we aimed to investigate the macroscopic and microscopic dose enhancement effect of metallic nanoparticles in interstitial brachytherapy of uterus cancer by an Iodine-125 source, using a nano-lattice model in the MCNPX (5) and MCNP6.1 codes. Materials and methods: Based on a nano-lattice simulation model containing a radiation source and a tumor tissue with cellular compartments loaded with 7 mg/g spherical nanoparticles (bismuth, gold, and gadolinium), the energy deposited by secondary electrons was estimated at the microscopic and macroscopic levels. Results: The results show that the macroscopic DEF values are higher than the microscopic DEF values and that the macroscopic DEF decreases as a function of distance from the brachytherapy source surface. The results also revealed a remarkable discrepancy between the DEF and secondary electron spectra calculated by the MCNPX (5) and MCNP6.1 codes, which can be explained by the difference in energy cut-offs and electron transport algorithms between the two codes. Conclusion: According to the outputs of both MCNPX (5) and MCNP6.1, it can be concluded that the presence of metallic nanoparticles in the tumor tissue of uterus cancer increases the physical effectiveness of brachytherapy with an I-125 source. The results presented herein give a physical view of the radiosensitization potential of different metallic nanoparticles and can be considered in the design of analytical and experimental radiosensitization studies in tumor regions using various radiotherapy modalities in the presence of heavy nanomaterials.
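
For reference, the dose enhancement factor reported here is conventionally defined as a simple ratio (the averaging volumes being those of the simulation model):

\[
\mathrm{DEF}(r) \;=\; \frac{D_{\text{with NP}}(r)}{D_{\text{without NP}}(r)},
\]

evaluated at distance r from the source surface, at either the voxel (macroscopic) or cellular-compartment (microscopic) scale.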

Keywords: MCNPX, MCNP6, nanoparticle, brachytherapy

Procedia PDF Downloads 101
7071 An Intelligence-Led Methodology for Detecting Dark Actors in Human Trafficking Networks

Authors: Andrew D. Henshaw, James M. Austin

Abstract:

Introduction: Human trafficking is an increasingly serious transnational criminal enterprise and social security issue. Despite ongoing efforts to mitigate the phenomenon and a significant expansion of security scrutiny over past decades, it is not receding. This is true for many nations in Southeast Asia, widely recognized as the global hub for trafficked persons, including men, women, and children. Human trafficking is difficult to address because numerous drivers, causes, and motivators allow it to persist, such as non-military and non-traditional security challenges, i.e., climate change, displacement driven by global warming, and natural disasters. These make displaced persons and refugees particularly vulnerable. The issue is so large that conservative estimates put its value at more than $150 billion per year (Niethammer, 2020), spanning sexual slavery and exploitation, forced labor in construction, mining, and conflict roles, and forced marriages of girls and women. Coupled with corruption throughout military, police, and civil authorities around the world, and the active hands of powerful transnational criminal organizations, it is likely that such figures are grossly underestimated, as human trafficking is misreported, under-detected, and deliberately obfuscated to protect those profiting from it. For example, the 2022 UN report on human trafficking shows a 56% reduction in convictions in that year alone (UNODC, 2022). Our Approach: To better understand this, our research utilizes a bespoke methodology. Applying a JAM (Juxtaposition Assessment Matrix), which we previously developed to detect flows of dark money around the globe (Henshaw, A. & Austin, J., 2021), we now focus on the human trafficking paradigm. Utilizing the JAM methodology has identified key indicators of human trafficking not previously explored in depth. Being a set of structured analytical techniques that provide panoramic interpretations of the subject matter, this iteration of the JAM further incorporates behavioral and driver indicators, including the employment of Open-Source Artificial Intelligence (OS-AI) across multiple collection points. The extracted behavioral data were then applied to identify non-traditional indicators as they contribute to human trafficking. Furthermore, as the JAM OS-AI analyses data from the inverted position, i.e., the viewpoint of the traffickers, it examines the behavioral and physical traits required to succeed. This transposed examination of the requirements for success delivers potential leverage points for exploitation in the fight against human trafficking in a new and novel way. Findings: Our approach identified innovative datasets that have previously been overlooked or, at best, undervalued. For example, the JAM OS-AI approach identified critical 'dark agent' lynchpins within human trafficking that are difficult to detect and harder to connect to actors and agents within a network. Our preliminary data suggest this is in part because 'dark agents' in extant research have been difficult to detect and much harder to connect directly to the actors and organizations in human trafficking networks. Our research demonstrates that new investigative techniques such as the OS-AI-aided JAM introduce a powerful toolset to increase understanding of human trafficking and transnational crime and to illuminate networks that, to date, avoid global law enforcement scrutiny.

Keywords: human trafficking, open-source intelligence, transnational crime, human security, international human rights, intelligence analysis, JAM OS-AI, Dark Money

Procedia PDF Downloads 90
7070 A Brain Controlled Robotic Gait Trainer for Neurorehabilitation

Authors: Qazi Umer Jamil, Abubakr Siddique, Mubeen Ur Rehman, Nida Aziz, Mohsin I. Tiwana

Abstract:

This paper discusses a brain-controlled robotic gait trainer for the neurorehabilitation of Spinal Cord Injury (SCI) patients. Patients suffering from spinal cord injuries become unable to control the motion of their lower extremities due to the degeneration of spinal cord neurons. The presented approach can help SCI patients in neurorehabilitation training by directly translating patient motor imagery into walker motion commands, thus bypassing the spinal cord neurons completely. A non-invasive EEG-based brain-computer interface is used to capture patient neural activity. For signal processing and classification, the open-source software OpenViBE is used. Classifiers categorize the patient's motor imagery (MI) into a specific set of commands that are then translated into walker motion commands. The robotic walker also employs fall detection to ensure the safety of the patient during gait training and can act as a support for SCI patients. The gait trainer was tested with subjects, and satisfactory results were achieved.
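
To make the MI-to-command pipeline concrete, a hedged sketch of one common approach (band-power features plus a linear classifier); OpenViBE's internal processing and the walker's actual command set are not specified in the abstract, so the features, classifier, and commands below are illustrative assumptions:

import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250  # sampling rate in Hz (assumed)

def band_power(epochs, lo, hi, fs=FS):
    """Mean power of each EEG channel in the [lo, hi] Hz band."""
    freqs, psd = welch(epochs, fs=fs, axis=-1)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[..., mask].mean(axis=-1)

def features(epochs):
    # mu (8-12 Hz) and beta (13-30 Hz) band power per channel
    return np.hstack([band_power(epochs, 8, 12), band_power(epochs, 13, 30)])

# toy training data: epochs shaped (n_trials, n_channels, n_samples)
rng = np.random.default_rng(0)
X_train = rng.standard_normal((40, 8, 2 * FS))
y_train = rng.integers(0, 2, 40)           # 0 = rest, 1 = walk imagery

clf = LinearDiscriminantAnalysis().fit(features(X_train), y_train)

COMMANDS = {0: "STOP", 1: "WALK_FORWARD"}  # hypothetical walker commands
new_epoch = rng.standard_normal((1, 8, 2 * FS))
print(COMMANDS[int(clf.predict(features(new_epoch))[0])])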

Keywords: brain computer interface (BCI), gait trainer, spinal cord injury (SCI), neurorehabilitation

Procedia PDF Downloads 159
7069 Aerosol Characterization in a Coastal Urban Area in Rimini, Italy

Authors: Dimitri Bacco, Arianna Trentini, Fabiana Scotto, Flavio Rovere, Daniele Foscoli, Cinzia Para, Paolo Veronesi, Silvia Sandrini, Claudia Zigola, Michela Comandini, Marilena Montalti, Marco Zamagni, Vanes Poluzzi

Abstract:

The Po Valley, in the north of Italy, is one of the most polluted areas in Europe. The air quality of the area is linked not only to anthropogenic activities but also to its geographical characteristics and stagnant weather conditions, with frequent inversions, especially in the cold season. Even the coastal areas present high values of particulate matter (PM10 and PM2.5), because the area enclosed between the Adriatic Sea and the Apennines does not favor the dispersion of air pollutants. The aim of the present work was to identify the main sources of particulate matter in Rimini, a tourist city in northern Italy. Two sampling campaigns were carried out in 2018, one in winter (60 days) and one in summer (30 days), at 4 sites: an urban background, a city hotspot, a suburban background, and a rural background. The samples were characterized by the ionic composition of the particulates and by the main anhydrosugars, in particular levoglucosan, a marker of biomass burning, because one of the most important anthropogenic sources in the area, both in winter and, surprisingly, in summer, is biomass burning. Furthermore, three sampling points were chosen in order to maximize the contribution of a specific biomass source: one in a residential area (domestic cooking and domestic heating), one in an agricultural area (weed fires), and one in the tourist area (restaurant cooking). At these sites, the analyses were enriched with the quantification of the carbonaceous component (organic and elemental carbon) and with measurements of the particle number concentration and aerosol size distribution (6-600 nm). The results showed a very significant impact of biomass combustion due to domestic heating in the winter period, along with many intense peaks attributable to episodic wood fires. In the summer season an appreciable biomass-combustion signal was also measured, although much less intense than in winter, attributable to domestic cooking activities. Further interesting results were the total absence of a sea salt contribution in the finer particulate (PM2.5), while in PM10 this contribution becomes appreciable only under particular wind conditions (strong wind from the north or north-east). Finally, it is interesting to note that in a small town like Rimini, in summer, the traffic source appears to be even more relevant than that measured in a much larger city (Bologna), due to tourism.

Keywords: aerosol, biomass burning, seacoast, urban area

Procedia PDF Downloads 125
7068 A Mixing Matrix Estimation Algorithm for Speech Signals under the Under-Determined Blind Source Separation Model

Authors: Jing Wu, Wei Lv, Yibing Li, Yuanfan You

Abstract:

The separation of speech signals has become a research hotspot in the field of signal processing in recent years. It has many applications and influences in teleconferencing, hearing aids, machine speech recognition, and so on. The sounds received are usually noisy. Identifying the sounds of interest and obtaining clear sounds in such an environment is a problem worth exploring, namely the problem of blind source separation. This paper focuses on under-determined blind source separation (UBSS). Sparse component analysis is generally used for this problem. The method is mainly divided into two parts: first, a clustering algorithm is used to estimate the mixing matrix from the observed signals; then the signals are separated based on the known mixing matrix. In this paper, the problem of mixing matrix estimation is studied, and an improved algorithm is proposed to estimate the mixing matrix for speech signals in the UBSS model. The traditional potential-function algorithm is not accurate for mixing matrix estimation, especially at low signal-to-noise ratio (SNR). In response to this problem, this paper develops an improved potential function method to estimate the mixing matrix. The algorithm not only avoids the influence of insufficient prior information in traditional clustering algorithms but also improves the estimation accuracy of the mixing matrix. This paper takes the mixing of four speech signals into two channels as an example. The simulation results show that the approach not only improves the accuracy of estimation but also applies to any mixing matrix.
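
A minimal sketch of the general clustering idea behind mixing matrix estimation in sparse component analysis: normalize high-energy two-channel samples to unit direction vectors and cluster them, taking cluster centers as mixing matrix columns. K-means stands in here for the paper's improved potential function method:

import numpy as np
from sklearn.cluster import KMeans

def estimate_mixing_matrix(X, n_sources, eps=1e-3):
    """X: (2, n_samples) observed two-channel mixture.
    Returns a 2 x n_sources estimate of the mixing matrix by clustering
    the directions of high-energy samples (where sparse sources rarely overlap)."""
    energy = np.linalg.norm(X, axis=0)
    V = X[:, energy > eps] / energy[energy > eps]   # unit-norm sample directions
    V[:, V[0] < 0] *= -1                            # fold out the sign ambiguity
    labels = KMeans(n_clusters=n_sources, n_init=10).fit(V.T).labels_
    cols = [V[:, labels == k].mean(axis=1) for k in range(n_sources)]
    A_hat = np.array(cols).T
    return A_hat / np.linalg.norm(A_hat, axis=0)

# toy demo: 4 sparse sources mixed into 2 channels (cf. the paper's example)
rng = np.random.default_rng(1)
A = rng.standard_normal((2, 4))
A /= np.linalg.norm(A, axis=0)
S = rng.standard_normal((4, 5000)) * (rng.random((4, 5000)) > 0.9)  # sparse sources
print(estimate_mixing_matrix(A @ S, n_sources=4).round(2))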

Keywords: DBSCAN, potential function, speech signal, the UBSS model

Procedia PDF Downloads 133
7067 Vibration Transmission across Junctions of Walls and Floors in an Apartment Building: An Experimental Investigation

Authors: Hugo Sampaio Libero, Max de Castro Magalhaes

Abstract:

The perception of sound radiated from a building floor is greatly influenced by the rooms in which it is immersed and by the positions of both listener and source. The main question that remains unanswered concerns the influence of the source position on the sound power radiated by a complex wall-floor system in buildings. This research investigates vibration transmission across walls and floors in buildings. It is primarily based on the determination of the vibration reduction index via experimental tests; knowledge of this parameter may help in predicting noise and vibration propagation in building components. First, the physical mechanisms of vibration transmission across structural junctions are described, and an experimental setup is built to aid this investigation. The experimental tests have shown that the vibration generated in the walls and floors is directly related to their size and boundary conditions. It is also shown that the vibration source position can affect the overall vibration spectrum significantly. Second, the characteristics of the noise spectra inside the rooms due to an impact source (tapping machine) are presented. Conclusions are drawn on the general trends of the vibration and noise spectra of the structural components and rooms, respectively. In summary, the aim of this paper is to investigate the vibro-acoustic behavior of building floors and walls under floor impact excitation, with the impact excitation applied at distinct positions on the slab. The analysis highlights the main physical characteristics of the vibration transmission mechanism.
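
For reference, the vibration reduction index of a junction between elements i and j is commonly defined (e.g., in ISO 10848-1) as

\[
K_{ij} = \overline{D}_{v,ij} + 10\log_{10}\!\left(\frac{l_{ij}}{\sqrt{a_i\,a_j}}\right),
\]

where \(\overline{D}_{v,ij}\) is the direction-averaged velocity level difference across the junction, \(l_{ij}\) the junction length, and \(a_i\), \(a_j\) the equivalent absorption lengths of the two elements.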

Keywords: vibration transmission, vibration reduction index, impact excitation, experimental tests

Procedia PDF Downloads 92
7066 Cost Overrun in Construction Projects

Authors: Hailu Kebede Bekele

Abstract:

Construction delays occur when project events do not take place at the expected time, due to causes related to the client, the consultant, and the contractor. Delay is the major cause of cost overrun, which leads to poor project efficiency. The difference between the completion cost and the originally estimated cost is known as cost overrun. Cost overruns are not simple issues that can be neglected; attention must be given to them to prevent the organization from failing and financial expenses from escalating. Previous studies show that the problem may arise in construction projects due to errors in budgeting, unfavorable weather conditions, inefficient machinery, and extravagance. This study focuses on mega projects, whose pace can significantly change the cost overrun calculation. Fifteen mega projects were identified to study the problem of cost overrun on site. The contractor, the consultant, and the client are the principal stakeholders in these mega projects, and 20 people from each sector were selected to participate in the investigation of a current mega construction project. The main objective of the study is to prioritize the major causes of the cost overrun problem. The methodology employed is qualitative, mostly rating the causes of construction project cost overrun; interviews, open-ended and closed-ended questions, group discussions, and qualitative rating methods are the most suitable for studying construction project overruns. The results show that design mistakes, shortages of labor and skilled labor, payment delays, old equipment, scheduling, weather conditions, transportation, inflation, order variations, market price fluctuations, and people's attitudes and philosophies are the leading causes of cost overrun that undermine project performance. Institutions should follow the scheduled activities to keep the project moving forward throughout its life.

Keywords: cost overrun, delay, mega projects, design

Procedia PDF Downloads 61
7065 Households’ Willingness to Pay for Watershed Management Practices in Lake Hawassa Watershed, Southern Ethiopia

Authors: Mulugeta Fola, Mengistu Ketema, Kumilachew Alamerie

Abstract:

Watersheds provide vast economic benefits within and beyond the management area of interest, but most watersheds in Ethiopia increasingly face the threat of degradation due to both natural and man-made causes. To reverse these problems, community participation in sustainable management programs is among the necessary measures. Hence, this study assessed households' willingness to pay for watershed management practices using a contingent valuation approach. A double-bounded dichotomous choice format with an open-ended follow-up was used to elicit households' willingness to pay. Based on data collected from 275 randomly selected households, descriptive statistics indicated that most households (79.64%) were willing to pay for watershed management practices. A bivariate probit model was employed to identify the determinants of households' willingness to pay and to estimate mean willingness to pay. The results show that age, gender, income, livestock holding, perception of watershed degradation, social position, and the offered bids were important variables affecting willingness to pay for watershed management practices. The mean willingness to pay for watershed management practices was calculated to be 58.41 Birr per year from the double-bounded format and 47.27 Birr per year from the open-ended format. The aggregate welfare gains from watershed management practices were calculated to be 931,581.09 Birr and 753,909.23 Birr per year from the double-bounded dichotomous choice and open-ended formats, respectively. Therefore, policymakers should have households pay for the services of watershed management practices in the study area.
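
For context, in dichotomous-choice contingent valuation with a probit link, mean WTP is typically recovered from the bid coefficient; under the common linear specification (not necessarily the exact bivariate specification estimated here),

\[
\Pr(\text{yes}\mid t) = \Phi(\alpha - \beta t), \qquad \widehat{E[\mathrm{WTP}]} = \hat{\alpha}/\hat{\beta},
\]

where t is the offered bid and \(\Phi\) the standard normal CDF.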

Keywords: bivariate probit model, contingent valuation, watershed management practices, willingness to pay

Procedia PDF Downloads 222
7064 Assertion-Driven Test Repair Based on Priority Criteria

Authors: Ruilian Zhao, Shukai Zhang, Yan Wang, Weiwei Wang

Abstract:

Repairing broken test cases is an expensive and challenging task in evolving software systems. Although an automated repair technique with intent preservation has been proposed, it does not take into account the association between test repairs and assertions, leading to a large number of irrelevant candidates and reduced repair capability. This paper proposes an assertion-driven test repair approach. Furthermore, an intent-oriented priority criterion is introduced to guide repair candidate generation, making the repairs closer to the intent of the test. In more detail, repair targets are determined through post-dominance relations between assertions and the methods that directly cause compilation errors. Then, test repairs are generated from the target in a bottom-up way, guided by the intent-oriented priority criteria. Finally, the generated repair candidates are prioritized to match the original test intent. The approach was implemented and evaluated on a benchmark of 4 open-source programs and 91 broken test cases. The results show that the approach can fix 89% (81/91) of broken test cases, which is more effective than the existing intent-preserving test repair approach, and that our intent-oriented priority criteria work well.
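
As a toy illustration of intent-oriented prioritization (a crude stand-in for the paper's post-dominance analysis and priority criteria), candidates can be ranked by how much they share with the broken assertion:

import re

def tokens(s):
    # identifiers appearing in a code snippet
    return set(re.findall(r"[A-Za-z_]\w*", s))

assertion = "assertEquals(cart.totalPrice(), 42)"   # hypothetical broken assertion
candidates = [
    "cart.total()",           # renamed method
    "cart.totalPrice(true)",  # changed signature
    "order.grandTotal()",     # unrelated receiver
]

# candidates sharing more identifiers with the assertion rank first
ranked = sorted(candidates,
                key=lambda c: len(tokens(c) & tokens(assertion)),
                reverse=True)
print(ranked)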

Keywords: test repair, test intent, software test, test case evolution

Procedia PDF Downloads 128
7063 Laparoscopic Proximal Gastrectomy in Gastroesophageal Junction Tumours

Authors: Ihab Saad Ahmed

Abstract:

Background: For Siewert type I and II gastroesophageal junction (GEJ) tumors, laparoscopic proximal gastrectomy can be performed. It is associated with several perioperative benefits compared with open proximal gastrectomy, and laparoscopic proximal gastrectomy (LPG) has become an increasingly popular approach for selected tumors. Methods: We describe our technique for LPG, including the preoperative work-up, illustrated images of the main steps of the surgery, and our postoperative course. Results: Thirteen patients (nine male, four female) with type I or II GEJ adenocarcinoma underwent laparoscopic radical proximal gastrectomy with D2 lymphadenectomy. All of our patients received neoadjuvant chemotherapy. Eleven patients had an intrathoracic anastomosis through a mini-thoracotomy (two hand-sewn end-to-end anastomoses; the other nine end-to-side using a circular stapler), of whom two had a flap-and-wrap technique; two patients had thoracoscopic esophageal and mediastinal lymph node dissection with a cervical anastomosis. The mean blood loss was 80 ml, no cases were converted to open surgery, and the mean operative time was 250 minutes. The average lymph node retrieval was 19-25 nodes. No severe complications such as leakage, stenosis, pancreatic fistula, or intra-abdominal abscess were reported; only one patient presented with an empyema 1.5 months after discharge, which was managed conservatively. Conclusion: For carefully selected patients, LPG for GEJ tumors of types I and II is a safe and reasonable alternative to the open technique, associated with similar oncologic outcomes and low morbidity. It showed less blood loss and fewer respiratory infections, with similar 1- and 3-year survival rates.

Keywords: LPG (laparoscopic proximal gastrectomy), GEJ (gastroesophageal junction tumour), D2 lymphadenectomy, neoadjuvant chemotherapy

Procedia PDF Downloads 123
7062 Subclasses of Bi-Univalent Functions Associated with Hohlov Operator

Authors: Rashidah Omar, Suzeini Abdul Halim, Aini Janteng

Abstract:

The coefficient estimate problem for the Taylor-Maclaurin series is still open, especially for functions in subclasses of bi-univalent functions. A function f ∈ A is said to be bi-univalent in the open unit disk D if both f and f⁻¹ are univalent in D. The symbol A denotes the class of all analytic functions f in D normalized by the conditions f(0) = f′(0) − 1 = 0; the class of bi-univalent functions is denoted by Σ. The subordination concept is used in determining the second and third Taylor-Maclaurin coefficients. The upper bounds for the second and third coefficients are estimated for functions in subclasses of bi-univalent functions that are subordinate to a function φ. An analytic function f is subordinate to an analytic function g if there is an analytic function w defined on D with w(0) = 0 and |w(z)| < 1 satisfying f(z) = g(w(z)). In this paper, two subclasses of bi-univalent functions associated with the Hohlov operator are introduced, and the bounds for the second and third coefficients of functions in these subclasses are determined using subordination. The findings generalize previous related work by several earlier authors.
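
In the standard notation of this literature, functions in A have the Taylor-Maclaurin expansion

\[
f(z) = z + \sum_{n=2}^{\infty} a_n z^n, \qquad z \in D,
\]

subordination \(f \prec g\) means \(f(z) = g(w(z))\) for some analytic w with \(w(0)=0\) and \(|w(z)|<1\), and the coefficient problem addressed here is to bound \(|a_2|\) and \(|a_3|\) over the subclasses introduced.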

Keywords: analytic functions, bi-univalent functions, Hohlov operator, subordination

Procedia PDF Downloads 290
7061 Predicting Open Chromatin Regions in Cell-Free DNA Whole Genome Sequencing Data by Correlation Clustering  

Authors: Fahimeh Palizban, Farshad Noravesh, Amir Hossein Saeidian, Mahya Mehrmohamadi

Abstract:

In the recent decade, the emergence of liquid biopsy has significantly improved cancer monitoring and detection. Dying cells, including those originating from tumors, shed their DNA into the blood and contribute to a pool of circulating fragments called cell-free DNA. Accordingly, identifying the tissue of origin of these DNA fragments from the plasma can result in more accurate and faster disease diagnosis and more precise treatment protocols. Open chromatin regions (OCRs) are important epigenetic features of DNA that reflect the cell types of origin. Profiling these features by DNase-seq, ATAC-seq, and histone ChIP-seq provides insights into tissue-specific and disease-specific regulatory mechanisms. Several studies in the area of cancer liquid biopsy integrate distinct genomic and epigenomic features for early cancer detection along with tissue-of-origin detection. However, multimodal analysis requires several types of experiments to cover the genomic and epigenomic aspects of a single sample, which leads to considerable cost and time. To overcome these limitations, predicting OCRs from WGS is of particular importance. In this regard, we propose a computational approach to predict open chromatin regions, an important epigenetic feature, from cell-free DNA whole genome sequencing data. To this end, local sequencing depth is fed to our proposed algorithm, and the most probable open chromatin regions are predicted from the whole genome sequencing data. Our method integrates signal processing with sequencing depth data and includes count normalization, Discrete Fourier Transform conversion, graph construction, graph cut optimization by linear programming, and clustering. To validate the proposed method, we compared the output of the clustering (open chromatin region+, open chromatin region-) with previously validated open chromatin regions related to human blood samples in the ATAC-DB database. The overlap between the predicted open chromatin regions and the experimentally validated regions obtained by ATAC-seq in ATAC-DB is greater than 67%, which indicates a meaningful prediction. As expected, OCRs are mostly located at the transcription start sites (TSS) of genes. We therefore also compared the concordance between the predicted OCRs and human gene TSS regions obtained from refTSS, finding agreement of around 52.04% with all genes and ~78% with housekeeping genes. Accurately detecting open chromatin regions from plasma cell-free DNA-seq data is a very challenging computational problem due to several confounding factors, such as technical and biological variations. Although this approach is in its infancy, there has already been an attempt to apply it, leading to a tool named OCRDetector, with some restrictions such as the need for high-depth cfDNA WGS data, prior information about the OCR distribution, and the consideration of multiple features. In contrast, we implemented graph signal clustering based on a single depth feature in an unsupervised learning manner, which results in faster performance and decent accuracy. Overall, we investigate the epigenomic pattern of a cell-free DNA sample from a new computational perspective that can be used along with other tools to investigate the genetic and epigenetic aspects of a single whole genome sequencing dataset for efficient liquid biopsy-related analyses.
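
A toy reduction of the described pipeline, with spectral clustering standing in for the authors' linear-programming graph cut; the bin size, feature count, and synthetic coverage signal are illustrative assumptions:

import numpy as np
from sklearn.cluster import SpectralClustering

def predict_ocr_labels(depth, win=500, n_freq=16):
    """Bin the per-base sequencing depth, normalize, take low-frequency DFT
    magnitudes as features, build a correlation graph over bins, and 2-way
    cluster it (OCR+ / OCR-)."""
    n_bins = len(depth) // win
    bins = depth[: n_bins * win].reshape(n_bins, win)
    bins = (bins - bins.mean(axis=1, keepdims=True)) / (bins.std(axis=1, keepdims=True) + 1e-9)
    feats = np.abs(np.fft.rfft(bins, axis=1))[:, :n_freq]   # DFT features per bin
    corr = np.corrcoef(feats)                               # correlation graph
    affinity = np.clip(corr, 0, None)                       # keep positive edges only
    return SpectralClustering(n_clusters=2, affinity="precomputed").fit_predict(affinity)

# synthetic coverage signal in place of real cfDNA WGS depth
depth = np.random.default_rng(2).poisson(30, size=50_000).astype(float)
print(np.bincount(predict_ocr_labels(depth)))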

Keywords: open chromatin regions, cancer, cell-free DNA, epigenomics, graph signal processing, correlation clustering

Procedia PDF Downloads 148
7060 Status Report of the GERDA Phase II Startup

Authors: Valerio D’Andrea

Abstract:

The GERmanium Detector Array (GERDA) experiment, located at the Laboratori Nazionali del Gran Sasso (LNGS) of INFN, searches for the 0νββ decay of 76Ge. Germanium diodes enriched to ∼86% in the double beta emitter 76Ge (enrGe) are exposed, serving as both source and detector of 0νββ decay. Neutrinoless double beta decay is considered a powerful probe to address still-open issues in the neutrino sector of the (beyond) Standard Model of particle physics. Since 2013, just after the completion of the first part of its experimental program (Phase I), the GERDA setup has been upgraded to perform the next step in its 0νββ searches (Phase II). Phase II aims to reach a sensitivity to the 0νββ decay half-life larger than 10^26 yr in about 3 years of physics data taking, exposing a detector mass of about 35 kg of enrGe with a background index of about 10^-3 cts/(keV·kg·yr). One of the main new features is the liquid argon scintillation light read-out, used to veto events that deposit only part of their energy in the Ge and the rest in the surrounding LAr. In this paper, the expected goals of GERDA Phase II, the upgrade work, and a few selected results from the 2015 commissioning and 2016 calibration runs are presented. The main Phase I achievements are also reviewed.
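
For context, the quoted design goals are tied together by the standard scaling argument for background-limited searches: the half-life sensitivity grows as

\[
T_{1/2}^{0\nu} \;\propto\; \varepsilon \sqrt{\frac{M\,t}{B\,\Delta E}},
\]

where ε is the detection efficiency (including enrichment), M t the exposure, B the background index, and ΔE the energy resolution; this is why lowering B to ~10⁻³ cts/(keV·kg·yr) is central to reaching a sensitivity beyond 10²⁶ yr.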

Keywords: GERDA, double beta decay, LNGS, germanium

Procedia PDF Downloads 367
7059 Efficient Pre-Processing of Single-Cell Assay for Transposase Accessible Chromatin with High-Throughput Sequencing Data

Authors: Fan Gao, Lior Pachter

Abstract:

The primary tool currently used to pre-process 10X Chromium single-cell ATAC-seq data is Cell Ranger, which can take a very long time to run on standard datasets. To facilitate rapid pre-processing that enables reproducible workflows, we present a suite of tools called scATAK for pre-processing single-cell ATAC-seq data that is 15 to 18 times faster than Cell Ranger on mouse and human samples. Our tool can also calculate chromatin interaction potential matrices and generate open chromatin signal and interaction traces for cell groups. We used scATAK to explore the chromatin regulatory landscape of a healthy adult human brain and unveil cell-type-specific features, showing that it provides a convenient and computationally efficient approach for pre-processing single-cell ATAC-seq data.

Keywords: single-cell, ATAC-seq, bioinformatics, open chromatin landscape, chromatin interactome

Procedia PDF Downloads 155
7058 The Expanding Role of Islamic Law in the Current Indonesian Legal Reform

Authors: Muhammad Ilham Agus Salim, Saufa Ata Taqiyya

Abstract:

In many Muslim countries, secularization successfully reduced the role of Islamic law as a formal legal source over the last century; the most obvious example was the transformation of the Ottoman state (Daulah Utsmaniyah) into the secular Republic of Turkey. Religion is strictly separated from state authority in many countries today. But in recent decades in Indonesia, a remarkable development is apparent: Islamic law has expanded its role in the Indonesian legal system, especially in district regulations. In Aceh province, as a case in point, shariah has been the basic source of law in all regulations, and more Indonesian provinces had adopted Islamic law as a formal legal source by the end of 2014. Unlike some other countries that formally stipulate the status of Islam, the Indonesian constitution grants no formal recognition of Islam as the religion of the state. Yet in this Muslim-majority country, Islamic law is taking its place in a democratic way, namely on the basis of the voice of the majority. This paper analyzes how this trend has increased significantly since the so-called Indonesian reformation era (the late nineties). Causes of this expansion are identified, and some lessons learned are offered as concluding remarks.

Keywords: Islamic law, Indonesia, legal reform, Syariah local regulation

Procedia PDF Downloads 349
7057 Development and Usability Assessment of a Connected Resistance Exercise Band Application for Strength-Monitoring

Authors: J. A. Batsis, G. G. Boateng, L. M. Seo, C. L. Petersen, K. L. Fortuna, E. V. Wechsler, R. J. Peterson, S. B. Cook, D. Pidgeon, R. S. Dokko, R. J. Halter, D. F. Kotz

Abstract:

Resistance exercise bands are a core component of any physical activity strengthening program. Strength training can mitigate the development of sarcopenia, the loss of muscle mass or strength and function with aging. Yet adherence to such behavioral exercise strategies in a home-based setting is fraught with issues of monitoring and compliance. Our group developed a Bluetooth-enabled resistance exercise band capable of transmitting data to an open-source platform. In this work, we developed an application to capture this information in real time and conducted three usability studies: two in mixed-age groups of participants (n=6 each) and one in a group of older adults with obesity participating in a weight-loss intervention (n=20). The system was viewed favorably, was acceptable, and provided iterative information that could assist future deployment on ubiquitous platforms. Our formative work provides the foundation for delivering home-based monitoring interventions in a high-risk older adult population.
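
A minimal sketch of real-time capture from such a band over Bluetooth Low Energy using the open-source bleak library; the device address, characteristic UUID, and sample encoding are hypothetical placeholders, since the band's actual GATT profile is not described here:

import asyncio
from bleak import BleakClient

BAND_ADDRESS = "AA:BB:CC:DD:EE:FF"                        # hypothetical device address
FORCE_CHAR_UUID = "0000beef-0000-1000-8000-00805f9b34fb"  # hypothetical characteristic

def on_sample(_sender, data: bytearray):
    # assumed encoding: 16-bit little-endian force sample
    force = int.from_bytes(data[:2], "little")
    print(f"force reading: {force}")

async def main():
    async with BleakClient(BAND_ADDRESS) as client:
        await client.start_notify(FORCE_CHAR_UUID, on_sample)  # subscribe to samples
        await asyncio.sleep(30)                                # stream for 30 s
        await client.stop_notify(FORCE_CHAR_UUID)

asyncio.run(main())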

Keywords: application, mHealth, older adult, resistance exercise band, sarcopenia

Procedia PDF Downloads 172
7056 Corrosion Characterization of ZA-27 Metal Matrix Composites

Authors: H. V. Jayaprakash, P. V. Krupakara

Abstract:

This paper deals with the higher corrosion resistance developed by metal matrix composites compared with the matrix alloy, assessed by open-circuit potential tests. The matrix selected is ZA-27, and the reinforcement is red mud particulate, a ceramic material. The composites were prepared by liquid melt metallurgy using the vortex method, with preheated but uncoated red mud particulates added to the melt. Metal matrix composites containing 2, 4, and 6 weight percent red mud were cast, and the matrix alloy was cast in the same way for comparison. Specimens were fabricated according to ASTM standards. The corrodents used for the tests were 0.025, 0.05, and 0.1 molar sodium hydroxide solutions. The specimens were subjected to open-circuit potential studies and weight-loss corrosion tests. The corrosion rate was found to decrease with increasing exposure time in both experiments. The effects of exposure time and of increasing the percentage of red mud reinforcement are discussed in detail.
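
For reference, weight-loss measurements are conventionally converted to a corrosion rate with the ASTM G31-style relation (units not stated in the abstract):

\[
\mathrm{CR} = \frac{K\,W}{A\,T\,D},
\]

where W is the mass loss, A the exposed area, T the exposure time, D the alloy density, and K a units constant (e.g., K = 534 for mils per year with W in mg, A in in², T in h, and D in g/cm³).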

Keywords: composites, vortex, particulates, red mud

Procedia PDF Downloads 447
7055 Deep Learning Approaches for Accurate Detection of Epileptic Seizures from Electroencephalogram Data

Authors: Ramzi Rihane, Yassine Benayed

Abstract:

Epilepsy is a chronic neurological disorder characterized by recurrent, unprovoked seizures resulting from abnormal electrical activity in the brain. Timely and accurate detection of these seizures is essential for improving patient care. In this study, we leverage the University of Bonn open-source EEG dataset and employ advanced deep-learning techniques to automate the detection of epileptic seizures. By extracting key features from both the time and frequency domains, as well as spectrogram features, we enhance the performance of various deep learning models. Our investigation includes architectures such as Long Short-Term Memory (LSTM), Bidirectional LSTM (Bi-LSTM), 1D Convolutional Neural Networks (1D-CNN), and hybrid CNN-LSTM and CNN-BiLSTM models. The models achieved impressive accuracies: LSTM (98.52%), Bi-LSTM (98.61%), CNN-LSTM (98.91%), CNN-BiLSTM (98.83%), and CNN (98.73%). Additionally, we utilized the oversampling technique SMOTE, which yielded the following results: CNN (97.36%), LSTM (97.01%), Bi-LSTM (97.23%), CNN-LSTM (97.45%), and CNN-BiLSTM (97.34%). These findings demonstrate the effectiveness of deep learning in capturing complex patterns in EEG signals, providing a reliable and scalable solution for real-time seizure detection in clinical environments.
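
A hedged sketch of a hybrid 1D CNN-LSTM of the kind compared above, sized for the 4097-sample segments of the Bonn dataset; the layer sizes are illustrative, not the paper's exact architecture:

import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        # convolutional front-end extracts local waveform features
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
        )
        # recurrent back-end models the temporal evolution of those features
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):          # x: (batch, 1, n_samples)
        z = self.conv(x)           # (batch, 32, n_samples/16)
        z = z.permute(0, 2, 1)     # LSTM expects (batch, time, features)
        _, (h, _) = self.lstm(z)
        return self.fc(h[-1])      # logits: seizure vs. non-seizure

model = CNNLSTM()
segment = torch.randn(8, 1, 4097)  # Bonn dataset segments are 4097 samples long
print(model(segment).shape)        # torch.Size([8, 2])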

Keywords: electroencephalogram, epileptic seizure, deep learning, LSTM, CNN, Bi-LSTM, seizure detection

Procedia PDF Downloads 10
7054 Open Channel Flow Measurement of Water by Using Width Contraction

Authors: Arun Goel, D. V. S. Verma, Sanjeev Sangwan

Abstract:

The present study aimed to develop a discharge-measuring device for irrigation and laboratory channels. Experiments were conducted on sharp-edged constricted flow meters having four types of width constriction, namely 2:1, 1.5:1, 1:1, and 90°, in the direction of flow. These devices were made of MS sheets and installed separately in a rectangular flume. All four devices were tested under free and submerged flow conditions. Eight different discharges varying from 2 lit/sec to 30 lit/sec were passed through each device, and in total around 500 observations of upstream and downstream depths were taken. For each discharge, free and submerged flow and critical submergence were noted and plotted. Once the upstream and downstream depths of flow over any of the devices are known, the discharge can be easily calculated with the help of the curves developed for free and submerged flow conditions. The device with the 2:1 contraction is the most efficient one, as it allows the maximum critical submergence.
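
For context, discharge through a rectangular width constriction under free-flow (critical) conditions follows the classical relation

\[
Q = C_d \left(\tfrac{2}{3}\right)^{3/2} \sqrt{g}\; b\, H^{3/2},
\]

where b is the throat width and H the upstream energy head; the curves developed in this study effectively calibrate the discharge coefficient \(C_d\) for each constriction and for submerged conditions.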

Keywords: flowrate, flowmeter, open channels, submergence

Procedia PDF Downloads 430
7053 Factors Affecting Women's Participation in Social, Political and Economic Decision-Making Positions at Kelemwollega Zone, Western Ethiopia

Authors: T. Aragaw, P. Gari

Abstract:

In spite of social, political, and economic marginalization, women are still considered the backbone of Ethiopia, one of the least developed countries in the world. The general purpose of this study was to assess factors that affect the participation of women in political, social, and economic decisions in the Kelem-Wollega Administrative Zone of Oromia Regional State, Ethiopia. The data used in this paper are mainly primary, with a few secondary data incorporated. Respondents were selected using a systematic random sampling method and were given questionnaires containing open-ended and closed-ended questions; focus group discussions were also held with study subjects in two offices. According to information collected from the KWAZ Development and Social Service Office, there are a total of 18,473 tax-paying employees in the Zone, 14% of its total population. Among them, 2,617 were recruited for this study based on the stated criteria, including staff (1.8% of the sample) of several church- and religion-owned integrated development projects in the KWAZ. Of these, 2,103 (80.34%) study participants responded in person, completing and returning the questionnaire to the researchers. The study revealed that in the public institutions of the KWAZ, the majority of women have an educational status of diploma or lower and occupy lower, non-decision-making positions. Conclusion: The major barriers hindering women include socio-cultural attitudes, lack of necessary experience and education, the burden of domestic responsibilities, and a lack of role models of women leaders in the Zone. Empowerment of women via social organizations, critical involvement of the government, and affirmative action for women are needed. Further research is required on the scope of the strategies and the challenges of implementing them.

Keywords: women, affirmative action, leadership, empowerment, Ethiopia

Procedia PDF Downloads 196
7052 Extraction of Forest Plantation Resources in Selected Forest of San Manuel, Pangasinan, Philippines Using LiDAR Data for Forest Status Assessment

Authors: Mark Joseph Quinto, Roan Beronilla, Guiller Damian, Eliza Camaso, Ronaldo Alberto

Abstract:

Forest inventories are essential to assess the composition, structure, and distribution of forest vegetation, which can be used as baseline information for management decisions. Classical forest inventory is labor intensive, time-consuming, and sometimes even dangerous. The use of Light Detection and Ranging (LiDAR) in forest inventory can improve on and overcome these restrictions. This study was conducted to determine the possibility of using LiDAR-derived data to extract forest biophysical parameters with high accuracy and to serve as a non-destructive method for assessing the forest status of San Manuel, Pangasinan. Forest resource extraction was carried out using LAStools, GIS, ENVI, and .bat scripts with the available LiDAR data. The process includes the generation of derivatives such as the Digital Terrain Model (DTM), Canopy Height Model (CHM), and Canopy Cover Model (CCM) in .bat scripts, followed by the generation of 17 composite bands used to extract forest cover classifications in ENVI 4.8 and GIS software. The Diameter at Breast Height (DBH), Above-Ground Biomass (AGB), and Carbon Stock (CS) were estimated for each classified forest cover, and tree count extraction was carried out using GIS. Subsequently, field validation was conducted for accuracy assessment. Results showed that the forest of San Manuel has 73% forest cover, relatively much higher than the 10% canopy cover requirement. For the extracted canopy height, 80% of tree heights range from 12 m to 17 m. The CS of the three forest covers, based on AGB, was 20,819.59 kg/20x20 m for closed broadleaf, 8,609.82 kg/20x20 m for broadleaf plantation, and 15,545.57 kg/20x20 m for open broadleaf. The average tree count for the forest plantation was 413 trees/ha. As such, the forest of San Manuel has high percent forest cover and high CS.
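
The CHM derivation mentioned above is conventionally the difference between a surface model and a terrain model. A minimal sketch with rasterio, using placeholder file names for the LiDAR-derived rasters:

import numpy as np
import rasterio

with rasterio.open("dsm.tif") as dsm_src, rasterio.open("dtm.tif") as dtm_src:
    dsm = dsm_src.read(1).astype("float32")
    dtm = dtm_src.read(1).astype("float32")
    profile = dsm_src.profile

chm = np.clip(dsm - dtm, 0, None)   # canopy height; negative artifacts clipped to 0

profile.update(dtype="float32", count=1)
with rasterio.open("chm.tif", "w", **profile) as dst:
    dst.write(chm, 1)

# e.g., the share of pixels taller than 5 m approximates canopy cover
print(f"canopy cover: {np.mean(chm > 5) * 100:.1f}%")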

Keywords: carbon stock, forest inventory, LiDAR, tree count

Procedia PDF Downloads 387
7051 Effect of Rice Cultivars and Water Regimes as a Mitigation Strategy for Greenhouse Gases in Paddy Fields

Authors: Mthiyane Pretty, Mitsui Toshiake, Aycan Murat, Nagano Hirohiko

Abstract:

Methane (CH₄) is one of the most important greenhouse gases (GHG) emitted into the atmosphere by terrestrial ecosystems, with a global warming potential (GWP) 25-34 times that of CO₂ on a centennial scale. Paddy rice cultivation is a major source of methane emissions and thus a major driver of climate change, so it is necessary to find GHG emission mitigation strategies for rice cultivation. A study was conducted at Niigata University whose prime objective was to determine the CH₄ emissions of lowland (NU1, YNU, Nipponbare, Koshihikari) and upland (Norin 1, Norin 24, Hitachihatamochi) japonica rice varieties grown in two media: paddy field soil and artificial soil. The treatments were laid out in a split-plot design, with soil moisture kept at 40-50% and 70%, respectively. CH₄ emission rates were determined by collecting air samples with the closed chamber technique and measuring CH₄ concentrations by gas chromatography; they varied with the growth stage, growth medium type, and development of the rice varieties. Soil moisture was monitored at a soil depth of 5-10 cm with a HydraGO portable soil sensor every three days for each pot, and temperatures were recorded with a sensitive thermometer. The lowest cumulative CH₄ emission rate was observed in Norin 24, particularly under 40-50% soil moisture. Across the rice genotypes, 40-50% soil moisture significantly reduced the cumulative CH₄, followed by 70% soil moisture. During the tillering stage, no significant variation in tillering or plant height was observed between the 40-50% and 70% soil moisture treatments. This study suggests that the cultivation of Norin 24 and Norin 1 under 70% soil irrigation could be effective at reducing CH₄ emissions in rice fields.
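
For context, closed-chamber fluxes are commonly computed from the rate of concentration change in the chamber headspace (a standard formulation, not necessarily the authors' exact one):

\[
F = \rho \cdot \frac{V}{A} \cdot \frac{\Delta c}{\Delta t} \cdot \frac{273}{273 + T},
\]

where ρ is the gas density at standard conditions, V and A the chamber volume and footprint area, Δc/Δt the rate of change of CH₄ concentration, and T the air temperature in °C.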

Keywords: methane, paddy fields, rice varieties, soil moisture

Procedia PDF Downloads 92
7050 Comparison of Deep Convolutional Neural Networks Models for Plant Disease Identification

Authors: Megha Gupta, Nupur Prakash

Abstract:

Identification of plant diseases has been performed using machine learning and deep learning models on datasets containing images of healthy and diseased plant leaves. The current study evaluates several deep learning models based on convolutional neural network (CNN) architectures for the identification of plant diseases. For this purpose, the publicly available New Plant Diseases Dataset, an augmented version of the PlantVillage dataset available on the Kaggle platform and containing 87,900 images, has been used. The dataset contains images of 26 diseases of 14 different plants and images of 12 healthy plants. The CNN models selected for the study are AlexNet, ZFNet, VGGNet (four models), GoogLeNet, and ResNet (three models). The selected models are trained using PyTorch, an open-source machine learning library, on Google Colaboratory. A comparative study has been carried out to analyze the high degree of accuracy achieved using these models. The highest test accuracy and F1-score, 99.59% and 0.996 respectively, were achieved using GoogLeNet with a mini-batch momentum-based gradient descent learning algorithm.
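
A hedged sketch of fine-tuning torchvision's GoogLeNet for the dataset's 38 classes (26 disease + 12 healthy) with mini-batch SGD plus momentum; the hyperparameters and dummy batch are illustrative, not the study's exact setup:

import torch
import torch.nn as nn
from torchvision import models

n_classes = 38  # 26 disease classes + 12 healthy plant classes

# ImageNet-pretrained backbone with the classifier head replaced
model = models.googlenet(weights="IMAGENET1K_V1", aux_logits=True)
model.fc = nn.Linear(model.fc.in_features, n_classes)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# one illustrative training step on a dummy batch of 224x224 RGB leaf images
model.train()
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, n_classes, (4,))
out = model(images)
logits = out.logits if hasattr(out, "logits") else out  # train-mode googlenet returns aux outputs too
loss = criterion(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))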

Keywords: comparative analysis, convolutional neural networks, deep learning, plant disease identification

Procedia PDF Downloads 196
7049 A Comparative Study of Twin Delayed Deep Deterministic Policy Gradient and Soft Actor-Critic Algorithms for Robot Exploration and Navigation in Unseen Environments

Authors: Romisaa Ali

Abstract:

This paper presents a comparison between the Twin Delayed Deep Deterministic Policy Gradient (TD3) and Soft Actor-Critic (SAC) reinforcement learning algorithms in the context of training robust navigation policies for Jackal robots. By leveraging an open-source framework and custom motion control environments, the study evaluates the performance, robustness, and transferability of the trained policies across a range of scenarios. The primary focus of the experiments is to assess the training process, the adaptability of the algorithms, and the robot's ability to navigate in previously unseen environments. Moreover, the paper examines the influence of varying environmental complexity on the learning process and on the generalization capabilities of the resulting policies. The results of this study aim to inform and guide the development of more efficient and practical reinforcement-learning-based navigation policies for Jackal robots in real-world scenarios.
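
An illustrative comparison harness using stable-baselines3, assumed here purely as a stand-in for the open-source framework mentioned; the custom Jackal environments are not public, so a standard Gymnasium task is substituted:

import gymnasium as gym
from stable_baselines3 import SAC, TD3

env_id = "Pendulum-v1"  # placeholder for the custom Jackal navigation environment

for Algo in (TD3, SAC):
    model = Algo("MlpPolicy", gym.make(env_id), verbose=0, seed=0)
    model.learn(total_timesteps=10_000)

    # evaluate the trained policy on a fresh environment instance
    eval_env = gym.make(env_id)
    obs, _ = eval_env.reset(seed=1)
    total = 0.0
    for _ in range(200):
        action, _ = model.predict(obs, deterministic=True)
        obs, reward, terminated, truncated, _ = eval_env.step(action)
        total += float(reward)
        if terminated or truncated:
            break
    print(f"{Algo.__name__}: episode return {total:.1f}")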

Keywords: Jackal robot environments, reinforcement learning, TD3, SAC, robust navigation, transferability, custom environment

Procedia PDF Downloads 100