Search results for: parallel algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4535

605 The Dilemma of Translanguaging Pedagogy in a Multilingual University in South Africa

Authors: Zakhile Somlata

Abstract:

In the context of international linguistic and cultural diversity, all languages can be used for all purposes. Africa in general, and South Africa in particular, is no exception to this multilingual and multicultural reality. The multilingual and multicultural nature of South African society has a direct bearing on the heterogeneity of South African universities. Universities, as centres of research, innovation and transformation of the entire society, should be at the forefront of leading multilingualism. During the colonial and apartheid eras, South African universities used English and, to a certain extent, Afrikaans as the only academic languages. The democratic breakthrough of 1994 brought linguistic relief in South Africa. The Constitution of the Republic of South Africa recognizes 11 official languages that should enjoy parity of esteem for the realization of multilingualism. The elevation of the nine previously marginalized indigenous African languages to academic languages in higher education is central to multilingualism. It is high time that an Afrocentric model, rather than a Eurocentric one, underpinned the education system in South Africa at all levels. Almost all South African universities have language policies that seek to promote access and success of students through multilingualism, but the main dilemma is the implementation of these policies. This study responds to two objectives: (i) to evaluate how selected institutions use language policies for the accessibility and success of students, and (ii) to study how selected universities integrate African languages for both academic and administrative purposes. This paper reflects the language policy practices in one selected University of Technology (UoT) in South Africa. The UoT has its own language policy, which depicts the linguistic diversity of the institution and its commitment to promoting multilingualism. Translanguaging pedagogy, which accommodates the use of minority languages in the teaching and learning process, plays a pivotal role in promoting multilingualism. This research paper employs a mixed methods (quantitative and qualitative) approach. Qualitative data were collected from key informants (insiders and experts), while quantitative data were collected from a cohort of third-year students. A mixed methods approach with a convergent parallel design allows the data to be collected and analysed separately and the results then compared. Language development initiatives are discussed within the framework of language policy and policy implementation strategies. Theoretically, this paper is rooted in language as a problem, language as a right and language as a resource. The findings demonstrate that despite being a multilingual institution, the UoT perpetuates the marginalization of African languages as academic languages. The findings further reveal the hegemony of English. Maintaining the status quo compromises the promotion of multilingualism, the Africanization of higher education and the intellectualization of indigenous African languages in South Africa under a democratic dispensation.

Keywords: afro-centric model, hegemony of English, language as a resource, translanguaging pedagogy

Procedia PDF Downloads 163
604 Petrology, Geochemistry and Formation Conditions of Metaophiolites of the Loki Crystalline Massif (the Caucasus)

Authors: Irakli Gamkrelidze, David Shengelia, Tamara Tsutsunava, Giorgi Chichinadze, Giorgi Beridze, Ketevan Tedliashvili, Tamara Tsamalashvili

Abstract:

The Loki crystalline massif crops out in the Caucasus region and, in geological retrospective, represents the northern marginal part of the Baiburt-Sevanian terrane (island arc), bordering the Paleotethys oceanic basin to the north. The pre-Alpine basement of the massif is built up of a Lower-Middle Paleozoic metamorphic complex (metasedimentary and metabasite rocks), Upper Devonian quartz-diorites and Late Variscan granites. The metamorphic complex was earlier considered an indivisible set including suites with different degrees of metamorphism. Systematic geologic, petrologic and geochemical investigations of the massif’s rocks suggest a different conception of the composition, structure and formation conditions of the massif. In particular, there are two main rock types in the Loki massif: the oldest autochthonous series of gneissic quartz-diorites and the granites cutting them. The massif is flanked on its western side by a volcano-sedimentary sequence metamorphosed to low-T facies. Petrologic, metamorphic and structural differences in this sequence prove the existence of a number of discrete units (overthrust sheets). One of them, the metabasic sheet, represents a fragment of an ophiolite complex. It comprises transitional types of the second and third layers of the paleo-oceanic crust: the upper non-cumulate gabbro component of the third layer and the lowest part of the parallel diabase dykes of the second layer. The ophiolites are represented by metagabbros, metagabbro-diabases, metadiabases and amphibolite schists. Based on the content of petrogenic components and additive elements in the metabasites, it is stated that the protolith of the metabasites belongs to the petrochemical type of the tholeiitic basalt series. The parental magma of the metaophiolites is of E-MORB composition and, by petrochemical parameters, is very close to the composition of intraplate basalts. The dykes of hypabyssal leucocratic siliceous and intermediate magmatic rocks associated with the metaophiolite sheet form a separate complex. They are granitoids with an extremely low content of CaO and quartz-diorite porphyries. According to various petrochemical parameters, these rocks have mixed characteristics. Their formation took place under spreading conditions or in areas of plume manifestation, most likely of island arc type. The metamorphism of the metaophiolites corresponds to a very low grade of the greenschist facies. The rocks of the metaophiolite complex were obducted from the Paleotethys Ocean. Geological and paleomagnetic data show that the primary location of the ocean is supposed to be to the north of the Loki crystalline massif.

Keywords: the Caucasus, crystalline massif, ophiolites, tectonic sheet

Procedia PDF Downloads 253
603 Reconstruction Spectral Reflectance Cube Based on Artificial Neural Network for Multispectral Imaging System

Authors: Iwan Cony Setiadi, Aulia M. T. Nasution

Abstract:

The multispectral imaging (MSI) technique has been used for skin analysis, especially for distant mapping of in-vivo skin chromophores by analyzing spectral data at each reflected image pixel. For ergonomic purposes, our multispectral imaging system is decomposed into two parts: a light source compartment based on LEDs with 11 different wavelengths and a monochromatic 8-bit CCD camera with a C-mount objective lens. A MATLAB GUI-based software application to control the system was also developed. Our system provides 11 monoband images and is coupled with software reconstructing hyperspectral cubes from these multispectral images. In this paper, we propose a new method to build a hyperspectral reflectance cube based on an artificial neural network algorithm. After preliminary corrections, a neural network is trained using the 32 natural color patches from the X-Rite ColorChecker Passport, whose reference spectra are acquired with a spectrophotometer. This neural network is then used to retrieve a megapixel multispectral cube between 380 and 880 nm with a 5 nm resolution from a low-spectral-resolution multispectral acquisition. As hyperspectral cubes contain a spectrum for each pixel, comparison should be made between the theoretical values from the spectrophotometer and the reconstructed spectra. To evaluate the performance of the reconstruction, we used the Goodness of Fit Coefficient (GFC) and the Root Mean Squared Error (RMSE). To validate the reconstruction, the set of 8 colour patches reconstructed by our MSI system and the spectra recorded by the spectrophotometer were compared. The average GFC was 0.9990 (standard deviation = 0.0010) and the average RMSE was 0.2167 (standard deviation = 0.064).
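
The two reconstruction metrics quoted above are straightforward to compute once the measured and reconstructed spectra are sampled on the same wavelength grid. Below is a minimal Python sketch (illustrative, not the authors' MATLAB code) of the Goodness of Fit Coefficient, i.e. the normalised inner product of the two spectra, and the Root Mean Squared Error; the placeholder spectra are assumptions.

```python
import numpy as np

def gfc(measured, reconstructed):
    # normalised inner product of the two spectra (1.0 = perfect fit)
    num = np.abs(np.dot(measured, reconstructed))
    den = np.linalg.norm(measured) * np.linalg.norm(reconstructed)
    return num / den

def rmse(measured, reconstructed):
    return np.sqrt(np.mean((measured - reconstructed) ** 2))

# placeholder spectra on the 380-880 nm grid with 5 nm steps (assumption)
wavelengths = np.arange(380, 885, 5)
measured = np.random.rand(len(wavelengths))
reconstructed = measured + np.random.normal(0, 0.01, len(wavelengths))
print(f"GFC = {gfc(measured, reconstructed):.4f}, RMSE = {rmse(measured, reconstructed):.4f}")
```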

Keywords: multispectral imaging, reflectance cube, spectral reconstruction, artificial neural network

Procedia PDF Downloads 297
602 Evolutionary Swarm Robotics: Dynamic Subgoal-Based Path Formation and Task Allocation for Exploration and Navigation in Unknown Environments

Authors: Lavanya Ratnabala, Robinroy Peter, E. Y. A. Charles

Abstract:

This research paper addresses the challenges of exploration and navigation in unknown environments from an evolutionary swarm robotics perspective. Path formation plays a crucial role in enabling cooperative swarm robots to accomplish these tasks. The paper presents a method called sub-goal-based path formation, which establishes a path between two different locations by exploiting visually connected sub-goals. Simulation experiments conducted in the ARGoS simulator demonstrate the successful formation of paths in the majority of trials. Furthermore, the paper tackles the problem of inter-robot collisions (traffic) among a large number of robots engaged in path formation, which negatively impacts the performance of the sub-goal-based method. To mitigate this issue, a task allocation strategy is proposed, leveraging local communication protocols and light-signal-based communication. The strategy evaluates the distance between points and determines the required number of robots for the path formation task, reducing unwanted exploration and traffic congestion. The performance of the sub-goal-based path formation and task allocation strategy is evaluated by comparing path length, time, and resource reduction against the A* algorithm. The simulation experiments demonstrate promising results, showcasing the scalability, robustness, and fault tolerance characteristics of the proposed approach.

Keywords: swarm, path formation, task allocation, Argos, exploration, navigation, sub-goal

Procedia PDF Downloads 19
601 Combining ASTER Thermal Data and Spatial-Based Insolation Model for Identification of Geothermal Active Areas

Authors: Khalid Hussein, Waleed Abdalati, Pakorn Petchprayoon, Khaula Alkaabi

Abstract:

In this study, we integrated ASTER thermal data with an area-based spatial insolation model to identify and delineate geothermally active areas in Yellowstone National Park (YNP). Two pairs of L1B ASTER day- and nighttime scenes were used to calculate land surface temperature. We employed the Emissivity Normalization Algorithm, which separates temperature from emissivity, to calculate surface temperature. We calculated the incoming solar radiation for the area covered by each of the four ASTER scenes using an insolation model and used this information to compute the temperature due to solar radiation. We then identified the statistical thermal anomalies using land surface temperature and the residuals calculated from modeled temperatures and ASTER-derived surface temperatures. Areas with temperatures or temperature residuals greater than 2σ, or between 1σ and 2σ, were considered ASTER-modeled thermal anomalies. The areas identified as thermal anomalies were in strong agreement with the thermal areas obtained from the YNP GIS database. Also, the YNP hot springs and geysers were located within areas identified as anomalous thermal areas. The consistency between our results and known geothermally active areas indicates that thermal remote sensing data, integrated with a spatial-based insolation model, provide an effective means of identifying and locating areas of geothermal activity over large areas and rough terrain.
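
As a minimal sketch of the thresholding step described above (illustrative, not the authors' exact procedure), the snippet below standardises the residuals between ASTER-derived and insolation-modelled land surface temperature and flags pixels exceeding the 1σ and 2σ levels; the array names and per-pixel kelvin grids are assumptions.

```python
import numpy as np

def classify_anomalies(lst_aster, lst_modelled):
    """Both inputs are per-pixel temperature grids (assumed in kelvin)."""
    residual = lst_aster - lst_modelled
    z = (residual - np.nanmean(residual)) / np.nanstd(residual)
    classes = np.zeros(residual.shape, dtype=np.uint8)   # 0 = background
    classes[(z > 1) & (z <= 2)] = 1                      # moderate anomaly (1-2 sigma)
    classes[z > 2] = 2                                   # strong anomaly (> 2 sigma)
    return classes
```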

Keywords: thermal remote sensing, insolation model, land surface temperature, geothermal anomalies

Procedia PDF Downloads 339
600 Identification of Watershed Landscape Character Types in Middle Yangtze River within Wuhan Metropolitan Area

Authors: Huijie Wang, Bin Zhang

Abstract:

In China, the middle reaches of the Yangtze River are well developed, boasting a wealth of different types of watershed landscape. In this regard, landscape character assessment (LCA) can serve as a basis for the protection, management and planning of trans-regional watershed landscape types. For this study, we chose the middle reaches of the Yangtze River in the Wuhan metropolitan area as our study site, wherein the water system exhibits a rich variety of landscape types. We analyzed trans-regional data to cluster and identify types of landscape characteristics at two levels. 55 basins were analyzed, with topography, land cover and river system features as variables, in order to identify the watershed landscape character types. For the watershed landscape, drainage density and degree of curvature were specified as special variables to directly reflect the regional differences in river system features. Then, we used the principal component analysis (PCA) method and a hierarchical clustering algorithm, based on a geographic information system (GIS) and the Statistical Product and Service Solutions (SPSS) software, to cluster the watershed landscapes into 8 characteristic groups. These groups highlight the watershed landscape characteristics of different river systems as well as key landscape characteristics that can serve as a basis for the targeted protection of watershed landscape character, thus helping to rationally develop multi-value landscape resources and promote coordinated trans-regional development.
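
A minimal sketch of the analytical core described above, standardising basin-level variables, reducing them with PCA and grouping the basins with agglomerative (hierarchical) clustering, is shown below. It is illustrative rather than the authors' GIS/SPSS workflow, and the CSV file and variable names are assumptions.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

basins = pd.read_csv("basin_variables.csv")   # hypothetical table: 55 rows of basin metrics
features = ["relief", "slope", "forest_cover", "built_up",
            "drainage_density", "curvature_degree"]        # assumed variable names

X = StandardScaler().fit_transform(basins[features])
scores = PCA(n_components=0.85).fit_transform(X)   # keep components explaining ~85% of variance
labels = AgglomerativeClustering(n_clusters=8, linkage="ward").fit_predict(scores)

basins["landscape_type"] = labels
print(basins.groupby("landscape_type").size())      # size of each of the 8 character groups
```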

Keywords: GIS, hierarchical clustering, landscape character, landscape typology, principal component analysis, watershed

Procedia PDF Downloads 178
599 High Aspect Ratio Micropillar Array Based Microfluidic Viscometer

Authors: Ahmet Erten, Adil Mustafa, Ayşenur Eser, Özlem Yalçın

Abstract:

We present a new viscometer based on a microfluidic chip with elastic high aspect ratio micropillar arrays. The displacement of pillar tips in the flow direction can be used to analyze the viscosity of a liquid. In our work, Computational Fluid Dynamics (CFD) is used to analyze the pillar displacement of various micropillar array configurations in the flow direction at different viscosities. Following CFD optimization, micro-CNC-based rapid prototyping is used to fabricate molds for the microfluidic chips. The microfluidic chips are fabricated out of polydimethylsiloxane (PDMS) using soft lithography methods with molds machined out of aluminum. Tip displacements of the micropillar array (300 µm in diameter and 1400 µm in height) in the flow direction are recorded using a microscope-mounted camera, and the displacements are analyzed using image processing with an algorithm written in MATLAB. Experiments are performed with water-glycerol solutions mixed at 4 different ratios to attain viscosities of 1 cP, 5 cP, 10 cP and 15 cP at room temperature. The prepared solutions are injected into the microfluidic chips using a syringe pump at flow rates from 10 to 100 mL/hr, and the displacement versus flow rate is plotted for different viscosities. A displacement of around 1.5 µm was observed for the 15 cP solution at 60 mL/hr, while only a 1 µm displacement was observed for the 10 cP solution. The presented viscometer design is still being optimized for better sensitivity and accuracy. Our microfluidic viscometer platform has potential for tailor-made microfluidic chips to enable real-time observation and control of viscosity changes in biological or chemical reactions.

Keywords: Computational Fluid Dynamics (CFD), high aspect ratio, micropillar array, viscometer

Procedia PDF Downloads 221
598 Study of Influencing Factors on the Flowability of Jute Nonwoven Reinforced Sheet Molding Compound

Authors: Miriam I. Lautenschläger, Max H. Scheiwe, Kay A. Weidenmann, Frank Henning, Peter Elsner

Abstract:

Due to increasing environmental awareness, jute fibers are more often used in fiber-reinforced composites. In the Sheet Molding Compound (SMC) process, the mold cavity is filled via material flow, allowing more complex component designs. However, the difficulty of using jute fibers in this process is the decreased capacity for fiber movement in the mold. A comparative flow study with jute nonwoven reinforced SMC was conducted, examining the influence of the fiber volume content, the grammage of the jute nonwoven textile and a mechanical modification of the nonwoven textile on the flowability. The nonwoven textile reinforcement was selected to support homogeneous fiber distribution. Trials were performed using two SMC paste formulations differing only in filler type. Platy-shaped kaolin with a mean particle size of 0.8 μm and ashlar calcium carbonate with a mean particle size of 2.7 μm were selected as fillers. To ensure comparability of the two SMC paste formulations, the filler content was adjusted to reach equal initial viscosity for both systems. The calcium carbonate-filled paste was set as the reference. The flow study was conducted using a jute nonwoven textile with 300 g/m² as the reference. The manufactured SMC sheets were stacked and centrally placed in a square mold. The mold coverage was varied between 25 and 90%, keeping the weight of the stack constant for comparison. Comparing the influence of the two fillers, kaolin yielded better results regarding a homogeneous fiber distribution. A mold coverage of about 68% was already sufficient to homogeneously fill the mold cavity, whereas for the calcium carbonate-filled system about 79% mold coverage was necessary. The flow study revealed a strong influence of the fiber volume content on the flowability. Fiber volume contents of 12 vol.-% and 25 vol.-% were compared for both SMC formulations. The lower fiber volume content strongly supported fiber transport, whereas 25 vol.-% showed insignificant influence. The results indicate a limiting fiber volume content for the flowability. The influence of the nonwoven textile grammage was determined using nonwoven jute material with 500 g/m² and a fiber volume content of 20 vol.-%. The 500 g/m² reinforcement material showed inferior results with regard to fiber movement. A mold coverage of about 90% was required to prevent the destruction of the nonwoven structure; below this mold coverage, the 500 g/m² nonwoven material was ripped and torn apart. Low mold coverages thus led to damage of the textile reinforcement. Due to the ripped nonwoven structure, the textile was modified with cuts in order to facilitate fiber movement in the mold. Parallel cuts of about 20 mm length and 20 mm distance to each other were applied to the textile, and the sheets were stacked with varying cut orientations prior to molding. Stacks with unidirectionally oriented cuts as well as stacks with cuts in various directions (e.g. 0°, 45°, 90°, -45°) were investigated. The mechanical modification supported tearing of the textile without any benefit for the flowability.

Keywords: filler, flowability, jute fiber, nonwoven, sheet molding compound

Procedia PDF Downloads 309
597 Investigation of Ground Disturbance Caused by Pile Driving: Case Study

Authors: Thayalan Nall, Harry Poulos

Abstract:

Piling is the most widely used foundation method for heavy structures in poor soil conditions. The geotechnical engineer can choose among a variety of piling methods, but in most cases, driving piles by impact hammer is the most cost-effective alternative. Under unfavourable conditions, driving piles can cause environmental problems, such as noise, ground movements and vibrations, with the risk of ground disturbance leading to potential damage to proposed structures. At one of the project sites in which the authors were involved, three offshore container terminals, namely CT1, CT2 and CT3, were constructed over thick compressible marine mud. The seabed was around 6 m deep, and the soft clay thickness within the project site varied between 9 m and 20 m. CT2 and CT3 were connected together, rectangular in shape, and 2600 m x 800 m in size. CT1 was 400 m x 800 m in size and was located to the south of CT2, opposite its eastern end. CT1 was constructed first and, due to time and environmental limitations, was supported on a “forest” of large-diameter driven piles. CT2 and CT3 are now under construction using a traditional dredging and reclamation approach with ground improvement by surcharging with vertical drains. A few months after the installation of the CT1 piles, a 2600 m long sand bund rising to 2 m above mean sea level was constructed along the southern perimeter of CT2 and CT3 to contain the dredged mud that was expected to be pumped. The sand bund was constructed by sand spraying and pumping using a dredging vessel. About 2000 m of the sand bund in the western section was constructed without any major stability issues or noticeable distress. However, as the sand bund approached the section parallel to CT1, it underwent a series of deep-seated failures, causing the displaced soft clay to heave above the standing water level. The crest of the sand bund was about 100 m away from the last row of piles. There were no plausible geological reasons to conclude that the marine mud across the CT1 region alone was weaker than over the rest of the site. Hence it was suspected that pile driving by impact hammer may have caused ground movements and vibrations, leading to the generation of excess pore pressures and cyclic softening of the marine mud. This paper investigates the probable cause of failure by reviewing: (1) all ground investigation data within the region; (2) soil displacement caused by pile driving, using theories similar to spherical cavity expansion; (3) transfer of stresses and vibrations through the entire system, including vibrations transmitted from the hammer to the pile, and the dynamic properties of the soil; and (4) generation of excess pore pressure due to ground vibration and the resulting cyclic softening. The evidence suggests that the problems encountered at the site were primarily caused by the “side effects” of the pile driving operations.

Keywords: pile driving, ground vibration, excess pore pressure, cyclic softening

Procedia PDF Downloads 205
596 Error Detection and Correction for Onboard Satellite Computers Using Hamming Code

Authors: Rafsan Al Mamun, Md. Motaharul Islam, Rabana Tajrin, Nabiha Noor, Shafinaz Qader

Abstract:

In an attempt to enrich the lives of billions of people by providing proper information, security and a way of communicating with others, the need for efficient and improved satellites is constantly growing. Thus, there is an increasing demand for better error detection and correction (EDAC) schemes capable of protecting the data onboard satellites. This paper is aimed at detecting and correcting such errors using a special algorithm called the Hamming code, which uses the concept of parity and parity bits to prevent single-bit errors onboard a satellite in Low Earth Orbit. The paper focuses on the study of Low Earth Orbit satellites and the process of generating the Hamming code matrix to be used for EDAC using computer programs. The most effective version of the Hamming code generated was the Hamming (16, 11, 4) version, implemented in MATLAB, and the paper compares this particular scheme with other EDAC mechanisms, including other versions of Hamming codes and Cyclic Redundancy Check (CRC), and discusses the limitations of this scheme. This particular version of the Hamming code guarantees single-bit error correction as well as double-bit error detection. Furthermore, this version has proved to be fast, with a checking time of 5.669 nanoseconds, has a relatively higher code rate and lower bit overhead compared to the other versions, and can detect a greater percentage of errors per length of code than other EDAC schemes with similar capabilities. In conclusion, with proper implementation of the system, it is quite possible to ensure a relatively uncorrupted satellite storage system.
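
For illustration, the sketch below implements an extended Hamming(16,11) SEC-DED codec in Python (the abstract's implementation is in MATLAB): 11 data bits, four positional parity bits and one overall parity bit, giving single-bit error correction and double-bit error detection as described above.

```python
def encode(data_bits):
    """data_bits: list of 11 ints (0/1). Returns a 16-bit codeword as a list."""
    assert len(data_bits) == 11
    code = [0] * 17                    # positions 1..16; index 0 unused for clarity
    parity_pos = {1, 2, 4, 8}
    data_iter = iter(data_bits)
    for pos in range(1, 16):           # positions 1..15 hold data + positional parity
        if pos not in parity_pos:
            code[pos] = next(data_iter)
    for p in parity_pos:               # even parity over positions whose index has bit p set
        code[p] = sum(code[i] for i in range(1, 16) if i & p) % 2
    code[16] = sum(code[1:16]) % 2     # overall parity bit enables double-error detection
    return code[1:]

def decode(codeword):
    """Returns (11 corrected data bits, status): 'ok', 'corrected' or 'double_error'."""
    code = [0] + list(codeword)
    syndrome = 0
    for p in (1, 2, 4, 8):
        if sum(code[i] for i in range(1, 16) if i & p) % 2:
            syndrome |= p
    overall = sum(code[1:17]) % 2
    status = 'ok'
    if syndrome and overall:           # single-bit error at position 'syndrome': correct it
        code[syndrome] ^= 1
        status = 'corrected'
    elif syndrome and not overall:     # two errors: detectable but not correctable
        status = 'double_error'
    elif not syndrome and overall:     # the overall parity bit itself was flipped
        code[16] ^= 1
        status = 'corrected'
    data = [code[i] for i in range(1, 16) if i not in (1, 2, 4, 8)]
    return data, status

# quick self-check: flip one bit and recover the original data
word = encode([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0])
word[6] ^= 1
print(decode(word))    # original data bits, 'corrected'
```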

Keywords: bit-flips, Hamming code, low earth orbit, parity bits, satellite, single error upset

Procedia PDF Downloads 102
595 Pilot-free Image Transmission System of Joint Source Channel Based on Multi-Level Semantic Information

Authors: Linyu Wang, Liguo Qiao, Jianhong Xiang, Hao Xu

Abstract:

In semantic communication, existing pilot-free joint source-channel coding (JSCC) wireless communication systems have unstable transmission performance and cannot effectively capture the global information and location information of images. In this paper, a pilot-free image transmission system of joint source-channel coding based on multi-level semantic information (multi-level JSCC) is proposed. The transmitter of the system is composed of two networks. The feature extraction network is used to extract the high-level semantic features of the image, compress the information to be transmitted, and improve bandwidth utilization. The feature retention network is used to preserve low-level semantic features and image details to improve communication quality. The receiver is also composed of two networks. The received high-level semantic features are fused, in the same dimension, with the low-level semantic features after a feature enhancement network; the image dimensions are then restored through a feature recovery network, and the image location information is effectively used for image reconstruction. This paper verifies that the proposed multi-level JSCC algorithm can effectively transmit and recover image information in both the AWGN channel and the Rayleigh fading channel, and that the peak signal-to-noise ratio (PSNR) is improved by 1-2 dB compared with other algorithms under the same simulation conditions.

Keywords: deep learning, JSCC, pilot-free picture transmission, multilevel semantic information, robustness

Procedia PDF Downloads 88
594 Theorizing Optimal Use of Numbers and Anecdotes: The Science of Storytelling in Newsrooms

Authors: Hai L. Tran

Abstract:

When covering events and issues, the news media often employ both personal accounts as well as facts and figures. However, the process of using numbers and narratives in the newsroom mostly operates through trial and error. There is a demonstrated need for the news industry to better understand the specific effects of storytelling and data-driven reporting on the audience, as well as the explanatory factors driving such effects. In the academic world, anecdotal evidence and statistical evidence have been studied in a mutually exclusive manner. Existing research tends to treat pertinent effects as though the use of one form precludes the other and as if a tradeoff is required. Meanwhile, narratives and statistical facts are often combined in various communication contexts, especially in news presentations. There is value in reconceptualizing and theorizing about both the relative and the collective impacts of numbers and narratives, as well as the mechanism underlying such effects. The current undertaking seeks to link theory to practice by providing a complete picture of how and why people are influenced by information conveyed through quantitative and qualitative accounts. Specifically, cognitive-experiential theory is invoked to argue that humans employ two distinct systems to process information. The rational system requires the processing of logical evidence through effortful, analytical cognitions, which are affect-free. Meanwhile, the experiential system is intuitive, rapid, automatic, and holistic, thereby demanding minimal cognitive resources and relating to the experience of affect. In certain situations, one system might dominate the other, but the rational and experiential modes of processing operate in parallel and at the same time. As such, anecdotes and quantified facts impact audience response differently, and a combination of data and narratives is more effective than either form of evidence alone. In addition, the present study identifies several media variables and human factors driving the effects of statistics and anecdotes. An integrative model is proposed to explain how message characteristics (modality, vividness, salience, congruency, position) and individual differences (involvement, numeracy skills, cognitive resources, cultural orientation) impact selective exposure, which in turn activates pertinent modes of processing and thereby induces corresponding responses. The present study represents a step toward bridging theoretical frameworks from various disciplines to better understand the specific effects and the conditions under which the use of anecdotal and/or statistical evidence enhances or undermines information processing. In addition to theoretical contributions, this research helps inform news professionals about the benefits and pitfalls of incorporating quantitative and qualitative accounts in reporting. It proposes a typology of possible scenarios and appropriate strategies for journalists to use when presenting news with anecdotes and numbers.

Keywords: data, narrative, number, anecdote, storytelling, news

Procedia PDF Downloads 57
593 Developmental Relationships between Alcohol Problems and Internalising Symptoms in a Longitudinal Sample of College Students

Authors: Lina E. Homman, Alexis C. Edwards, Seung Bin Cho, Danielle M. Dick, Kenneth S. Kendler

Abstract:

Research supports an association between alcohol problems and internalising symptoms, but the understanding of how the two phenotypes relate to each other is poor. It has been hypothesized that the relationship between the phenotypes is causal; however, investigations regarding its direction are inconsistent. Clarity about the relationship between the two phenotypes may be gained by investigating their developmental inter-relationships longitudinally. The objectives of the study were to investigate a) changes in alcohol problems and internalising symptoms in college students across time, b) the direction of effect of growth between alcohol problems and internalising symptoms from late adolescence to emerging adulthood, and c) possible gender differences. The present study adds to the knowledge of the comorbidity of alcohol problems and internalising symptoms by examining a longitudinal sample of college students and by examining the simultaneous development of the symptoms. A sample of college students is of particular interest as symptoms of both phenotypes often have their onset around this age. A longitudinal sample of college students from a large, urban, public university in the United States was used. Data were collected over a period of 2 years at 3 time points. Latent growth models were applied to examine growth trajectories. Parallel process growth models were used to assess whether the initial level and rate of change of one symptom affected the initial level and rate of change of the other. Possible effects of gender and ethnicity were investigated. Alcohol problems significantly increased over time, whereas internalising symptoms remained relatively stable. The two phenotypes were significantly correlated in each wave, and correlations were stronger among males. The initial level of alcohol problems was significantly positively correlated with the initial level of internalising symptoms. The rate of change of alcohol problems positively predicted the rate of change of internalising symptoms for females but not for males. The rate of change of internalising symptoms did not predict the rate of change of alcohol problems for either gender. Participants of Black and Asian ethnicities indicated significantly lower levels of alcohol problems and a lower increase of internalising symptoms across time compared to White participants. Participants of Black ethnicity also reported significantly lower levels of internalising symptoms compared to White participants. The present findings provide additional support for a positive relationship between alcohol problems and internalising symptoms in youth. Our findings indicated that the two phenotypes were correlated throughout the sample and mainly implied a bi-directional relationship between them in terms of significant associations between initial levels as well as rates of change. No direction of causality was indicated in males, but significant results were found in females, where alcohol problems acted as the main driver of the comorbidity of alcohol problems and internalising symptoms; alcohol may have more detrimental effects in females than in males. Importantly, our study examined a population-based longitudinal sample of college students, revealing that the observed relationships are not limited to individuals with clinically diagnosed mental health or substance use problems.

Keywords: alcohol, comorbidity, internalising symptoms, longitudinal modelling

Procedia PDF Downloads 321
592 Model-Based Fault Diagnosis in Carbon Fiber Reinforced Composites Using Particle Filtering

Authors: Hong Yu, Ion Matei

Abstract:

Carbon fiber reinforced composites (CFRP) used as aircraft structures are subject to lightning strikes, putting structural integrity at risk. Indirect damage may occur after a lightning strike, where the internal structure can be damaged due to excessive heat induced by the lightning current while the surface of the structure remains intact. Three damage modes may be observed after a lightning strike: fiber breakage, inter-ply delamination and intra-ply cracks. The assessment of internal damage states in composites is challenging due to the complicated microstructure, inherent uncertainties, and existence of multiple damage modes. In this work, a model-based approach is adopted to diagnose faults in carbon composites after lightning strikes. A resistor network model is implemented to relate the overall electrical and thermal conduction behavior under a simulated lightning current waveform to the intrinsic temperature-dependent material properties, microstructure and degradation of the materials. A fault detection and identification (FDI) module utilizes the physics-based model and a particle filtering algorithm to identify the damage mode as well as calculate the probability of structural failure. Extensive simulation results are provided to substantiate the proposed fault diagnosis methodology for both single-fault and multiple-fault cases. The approach is also demonstrated on transient resistance data collected from an IM7/Epoxy laminate under a simulated lightning strike.
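
A minimal sketch of the estimation step is given below: a bootstrap particle filter that tracks a hidden damage-state variable from noisy transient resistance measurements. The state-transition and measurement functions are stand-in assumptions, not the authors' resistor-network model.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000                                   # number of particles

def transition(x):                         # assumed damage-growth dynamics
    return x + 0.01 + rng.normal(0, 0.005, size=x.shape)

def measure(x):                            # assumed resistance-vs-damage mapping
    return 1.0 + 2.0 * x

def particle_filter(observations, meas_std=0.05):
    particles = rng.uniform(0.0, 0.1, size=N)      # initial belief over damage state
    estimates = []
    for z in observations:
        particles = transition(particles)          # propagate particles
        weights = np.exp(-0.5 * ((z - measure(particles)) / meas_std) ** 2) + 1e-12
        weights /= weights.sum()
        estimates.append(np.sum(weights * particles))
        # systematic resampling to avoid weight degeneracy
        idx = np.searchsorted(np.cumsum(weights), (rng.random() + np.arange(N)) / N)
        particles = particles[np.minimum(idx, N - 1)]
    return np.array(estimates)

# synthetic demonstration: growing damage observed through noisy resistance readings
true_damage = np.cumsum(np.full(50, 0.01))
obs = measure(true_damage) + rng.normal(0, 0.05, 50)
print(particle_filter(obs)[-5:])
```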

Keywords: carbon composite, fault detection, fault identification, particle filter

Procedia PDF Downloads 172
591 Pattern of Adverse Drug Reactions with Platinum Compounds in Cancer Chemotherapy at a Tertiary Care Hospital in South India

Authors: Meena Kumari, Ajitha Sharma, Mohan Babu Amberkar, Hasitha Manohar, Joseph Thomas, K. L. Bairy

Abstract:

Aim: To evaluate the pattern of occurrence of adverse drug reactions (ADRs) with platinum compounds in cancer chemotherapy at a tertiary care hospital. Methods: It was a retrospective, descriptive case record study of patients admitted to the medical oncology ward of Kasturba Hospital, Manipal, from July to November 2012. The inclusion criteria comprised patients of both sexes and all ages diagnosed with cancer who were on platinum compounds and developed at least one adverse drug reaction during or after the treatment period. The CDSCO proforma was used for reporting ADRs. Causality was assessed using the Naranjo algorithm. Results: A total of 65 patients were included in the study. Females comprised 67.69% and the rest were males. Around 49.23% of the ADRs were seen in the age group of 41-60 years, followed by 20% in 21-40 years, 18.46% in patients over 60 years and 12.31% in the 1-20 years age group. The anticancer agents which caused adverse drug reactions in our study were carboplatin (41.54%), cisplatin (36.92%) and oxaliplatin (21.54%). The most common adverse drug reactions observed were oral candidiasis (21.53%), vomiting (16.92%), anaemia (12.3%), diarrhoea (12.3%) and febrile neutropenia (0.08%). Causality assessment classified most of the cases as probable. Conclusion: The adverse effects of chemotherapeutic agents are a matter of concern in the pharmacological management of cancer, as they affect the quality of life of patients. This information would be useful in identifying and minimizing preventable adverse drug reactions while generally enhancing the knowledge of prescribers to deal with these adverse drug reactions more efficiently.

Keywords: adverse drug reactions, platinum compounds, cancer, chemotherapy

Procedia PDF Downloads 397
590 Heuristics for Optimizing Power Consumption in the Smart Grid

Authors: Zaid Jamal Saeed Almahmoud

Abstract:

Our increasing reliance on electricity, with inefficient consumption trends, has resulted in several economic and environmental threats. These threats include wasting billions of dollars, draining limited resources, and elevating the impact of climate change. As a solution, the smart grid is emerging as the future power grid, with smart techniques to optimize power consumption and electricity generation. Minimizing the peak power consumption under a fixed delay requirement is a significant problem in the smart grid. In addition, matching demand to supply is a key requirement for the success of the future electricity grid. In this work, we consider the problem of minimizing the peak demand under appliance constraints by scheduling power jobs with uniform release dates and deadlines. As the problem is known to be NP-hard, we propose two versions of a heuristic algorithm for solving it. Our theoretical analysis and experimental results show that the proposed heuristics outperform existing methods by providing a better approximation to the optimal solution. In addition, we consider dynamic pricing methods to minimize the peak load and match demand to supply in the smart grid. Our contribution is the proposal of generic, as well as customized, pricing heuristics to minimize the peak demand and match demand with supply. We also propose optimal pricing algorithms that can be used when the maximum deadline period of the power jobs is relatively small. Finally, we provide theoretical analysis and conduct several experiments to evaluate the performance of the proposed algorithms.
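
As an illustration of the kind of heuristic discussed above (not the authors' exact algorithm), the sketch below greedily places power jobs that share a common release date and deadline so as to keep the resulting peak load low; the job data and slot granularity are assumptions.

```python
def schedule_jobs(jobs, horizon):
    """jobs: list of (power, duration) tuples; horizon: number of time slots
    between the common release date and the common deadline."""
    load = [0.0] * horizon
    schedule = []
    # place the most power-hungry jobs first, each at the start slot that
    # yields the smallest overall peak
    for power, duration in sorted(jobs, key=lambda j: j[0], reverse=True):
        best_start, best_peak = 0, float("inf")
        for start in range(horizon - duration + 1):
            trial = load.copy()
            for t in range(start, start + duration):
                trial[t] += power
            peak = max(trial)
            if peak < best_peak:
                best_start, best_peak = start, peak
        for t in range(best_start, best_start + duration):
            load[t] += power
        schedule.append((power, duration, best_start))
    return schedule, max(load)

jobs = [(2.0, 3), (1.5, 2), (1.0, 4), (0.5, 2)]   # hypothetical (kW, slots) jobs
plan, peak = schedule_jobs(jobs, horizon=8)
print(plan, "peak kW:", peak)
```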

Keywords: heuristics, optimization, smart grid, peak demand, power supply

Procedia PDF Downloads 61
589 Applying Kinect on the Development of a Customized 3D Mannequin

Authors: Shih-Wen Hsiao, Rong-Qi Chen

Abstract:

In the field of fashion design, the 3D mannequin is an assisting tool that can rapidly realize design concepts. When the concept of the 3D mannequin is applied to computer-aided fashion design, it connects with the development and application of design platforms and systems. It is therefore critical to develop a 3D mannequin module that corresponds to the needs of fashion design. This research proposes a concrete plan for developing and constructing a 3D mannequin system with Kinect. Ergonomic measurements of an individual's body features can be attained in real time through the Kinect depth camera, and mesh morphing can then be implemented by transforming the locations of the control points on the model using those ergonomic data, yielding an exclusive 3D mannequin model. In the proposed methodology, after the points scanned by the Kinect are corrected for accuracy and smoothed, a complete human body shape is reconstructed by the ICP algorithm together with image processing methods. The reconstructed body shape can then be recognized, analyzed and measured. Furthermore, the ergonomic measurements can be applied to shape morphing of the 3D mannequin, which is divided by feature curves. Because a standardized and customer-oriented 3D mannequin can be generated through subdivision, the research can be applied to fashion design or to the presentation and display of 3D virtual clothes. In order to examine the practicality of the proposed structure, a 3D mannequin system was constructed with a Java program in this study, and experiments were conducted to verify its practicability.
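
A minimal sketch of the point-cloud alignment step, a standard ICP loop pairing nearest neighbours and solving the best-fit rigid transform with an SVD (Kabsch) step, is given below. It is illustrative Python rather than the authors' Java implementation, and the synthetic point cloud is an assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=50, tol=1e-6):
    """source, target: (N,3) and (M,3) point clouds; returns the aligned source."""
    src = source.copy()
    tree = cKDTree(target)
    prev_err = np.inf
    for _ in range(iters):
        dist, idx = tree.query(src)            # nearest-neighbour correspondences
        matched = target[idx]
        mu_s, mu_t = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_t)  # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T                         # best-fit rotation (Kabsch)
        if np.linalg.det(R) < 0:               # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return src

# quick self-check: align a slightly rotated and shifted copy back onto the original
pts = np.random.rand(500, 3)
theta = np.deg2rad(10)
rot = np.array([[np.cos(theta), -np.sin(theta), 0],
                [np.sin(theta),  np.cos(theta), 0],
                [0, 0, 1]])
aligned = icp(pts @ rot.T + 0.05, pts)
print(np.abs(aligned - pts).max())
```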

Keywords: 3D mannequin, Kinect scanner, iterative closest point, shape morphing, subdivision

Procedia PDF Downloads 279
588 Prediction Modeling of Alzheimer’s Disease and Its Prodromal Stages from Multimodal Data with Missing Values

Authors: M. Aghili, S. Tabarestani, C. Freytes, M. Shojaie, M. Cabrerizo, A. Barreto, N. Rishe, R. E. Curiel, D. Loewenstein, R. Duara, M. Adjouadi

Abstract:

A major challenge in medical studies, especially longitudinal ones, is the problem of missing measurements, which hinders the effective application of many machine learning algorithms. Furthermore, recent Alzheimer's disease studies have focused on the delineation of Early Mild Cognitive Impairment (EMCI) and Late Mild Cognitive Impairment (LMCI) from cognitively normal controls (CN), which is essential for developing effective and early treatment methods. To address the aforementioned challenges, this paper explores the potential of the eXtreme Gradient Boosting (XGBoost) algorithm for handling missing values in multiclass classification. We seek a generalized classification scheme where all prodromal stages of the disease are considered simultaneously in the classification and decision-making processes. Given the large number of subjects (1631) included in this study, and in the presence of almost 28% missing values, we investigated the performance of XGBoost on the classification of the four classes AD, CN, EMCI and LMCI. Using a 10-fold cross-validation technique, XGBoost is shown to outperform other state-of-the-art classification algorithms by 3% in terms of accuracy and F-score. Our model achieved an accuracy of 80.52%, a precision of 80.62% and a recall of 80.51%, supporting the more natural and promising multiclass classification.
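
A minimal sketch of the classification setup, training XGBoost directly on multiclass data containing missing values (NaNs are routed along learned default branches, so no imputation is needed) and scoring it with 10-fold cross-validation, is shown below. The file names, class coding and hyperparameters are assumptions, not the authors' pipeline.

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import cross_val_score

X = np.load("features.npy")    # hypothetical feature matrix containing NaNs
y = np.load("labels.npy")      # assumed coding: 0=CN, 1=EMCI, 2=LMCI, 3=AD

clf = xgb.XGBClassifier(
    objective="multi:softprob",
    n_estimators=300,
    max_depth=4,
    learning_rate=0.05,
    missing=np.nan,            # NaNs handled natively by the tree splits
)
scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
print(f"10-fold accuracy: {scores.mean():.4f} +/- {scores.std():.4f}")
```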

Keywords: eXtreme gradient boosting, missing data, Alzheimer disease, early mild cognitive impairment, late mild cognitive impairment, multiclass classification, ADNI, support vector machine, random forest

Procedia PDF Downloads 159
587 Fragment Domination for Many-Objective Decision-Making Problems

Authors: Boris Djartov, Sanaz Mostaghim

Abstract:

This paper presents a number-based dominance method. The main idea is to fragment the many attributes of the problem into subsets suitable for the well-established concept of Pareto dominance. Although similar methods can be found in the literature, they focus on comparing the solutions one objective at a time, while the focus of this method is to compare entire subsets of the objective vector. Given the nature of the method, it is computationally costlier than other methods and is thus geared more towards selecting an option from a finite set of alternatives, where each solution is defined by multiple objectives. The need for this method was motivated by dynamic alternate airport selection (DAAS). In DAAS, pilots, while en route to their destination, can find themselves in a situation where they need to select a new landing airport. In such a predicament, they need to consider multiple alternatives with many different characteristics, such as wind conditions, available landing distance, the fuel needed to reach the airport, etc. Hence, this method is primarily aimed at human decision-makers. Many methods within the field of multi-objective and many-objective decision-making rely on the decision-maker to initially provide the algorithm with preference points and weight vectors; this method aims to omit that very difficult step, which becomes especially hard when the number of objectives is large. The proposed method will be compared to the Favour (1 − k)-Dom and L-dominance (LD) methods. The tests will be conducted using well-established test problems from the literature, such as the DTLZ problems. The proposed method is expected to outperform the currently available methods in the literature and hopefully provide future decision-makers and pilots with support when dealing with many-objective optimization problems.
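
One possible reading of fragment dominance is sketched below (an interpretation for illustration, not the authors' exact definition): the objective vector is partitioned into subsets ("fragments"), Pareto dominance is checked per fragment, and a solution is preferred if it dominates on more fragments than the alternative.

```python
from typing import List, Sequence

def pareto_dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """Minimization: a dominates b if a <= b everywhere and a < b somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fragment_preference(a, b, fragments: List[List[int]]) -> int:
    """Returns +1 if a is preferred, -1 if b is preferred, 0 if tied."""
    wins_a = wins_b = 0
    for frag in fragments:
        fa = [a[i] for i in frag]
        fb = [b[i] for i in frag]
        if pareto_dominates(fa, fb):
            wins_a += 1
        elif pareto_dominates(fb, fa):
            wins_b += 1
    return (wins_a > wins_b) - (wins_a < wins_b)

# example: 6 objectives split into 3 fragments of 2 (hypothetical values)
fragments = [[0, 1], [2, 3], [4, 5]]
a = [1.0, 2.0, 0.5, 0.5, 3.0, 3.0]
b = [2.0, 2.0, 0.6, 0.7, 2.9, 3.1]
print(fragment_preference(a, b, fragments))   # +1: a dominates b on two fragments
```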

Keywords: multi-objective decision-making, many-objective decision-making, multi-objective optimization, many-objective optimization

Procedia PDF Downloads 58
586 Artificial Intelligence-Generated Previews of Hyaluronic Acid-Based Treatments

Authors: Ciro Cursio, Giulia Cursio, Pio Luigi Cursio, Luigi Cursio

Abstract:

Communication between practitioner and patient is of the utmost importance in aesthetic medicine: as of today, images of previous treatments are the most common tool used by doctors to describe and anticipate future results for their patients. However, using photos of other people often reduces the engagement of the prospective patient and is further limited by the number and quality of pictures available to the practitioner. Pre-existing work addresses this issue in two ways: 3D scanning of the area with manual editing of the 3D model by the doctor, or automatic prediction of the treatment by warping the image with hand-written parameters. The first approach requires the manual intervention of the doctor, while the second generates results that are not always realistic. Thus, in one case there is significant manual work required of the doctor, and in the other the prediction can look artificial. We propose an AI-based algorithm that autonomously generates a realistic prediction of treatment results. For the purpose of this study, we focus on hyaluronic acid treatments in the facial area. Our approach takes into account the individual characteristics of each face, and furthermore, the prediction system allows the patient to decide which area of the face they want to modify. We show that the predictions generated by our system are realistic: first, the quality of the generated images is on par with that of real images; second, the prediction matches the actual results obtained after the treatment is completed. In conclusion, the proposed approach provides a valid tool for doctors to show patients what they will look like before deciding on the treatment.

Keywords: prediction, hyaluronic acid, treatment, artificial intelligence

Procedia PDF Downloads 87
585 Quantum Statistical Machine Learning and Quantum Time Series

Authors: Omar Alzeley, Sergey Utev

Abstract:

Minimizing a constrained multivariate function is fundamental to machine learning, and such algorithms are at the core of data mining and data visualization techniques. The decision function that maps input points to output points is based on the result of optimization, and this optimization is central to learning theory. One approach to complex systems, in which the dynamics of the system are inferred from a statistical analysis of the fluctuations in time of some associated observable, is time series analysis. The purpose of this paper is a mathematical transition from the autoregressive model of classical time series to the matrix formalization of quantum theory. Firstly, we propose a quantum time series (QTS) model. Although the Hamiltonian technique has become an established tool for detecting deterministic chaos, other approaches are emerging. The quantum probabilistic technique is used to motivate the construction of our QTS model, which resembles the quantum dynamic model that has been applied to financial data. Secondly, various statistical methods, including machine learning algorithms such as the Kalman filter, are applied to estimate and analyse the unknown parameters of the model. Finally, simulation techniques such as Markov chain Monte Carlo have been used to support our investigations. The proposed model has been examined using real and simulated data. We establish the relation between quantum statistical machine learning and quantum time series via random matrix theory. It is interesting to note that the primary focus of the application of QTS in the field of quantum chaos was to find a model that explains chaotic behaviour; this model may reveal further insight into quantum chaos.
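
As a small illustration of the estimation step mentioned above (not the authors' QTS implementation), the sketch below runs a scalar Kalman filter over a noisy AR(1)-type series; the AR coefficient and noise variances are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
phi, q, r = 0.9, 0.05, 0.2          # state coefficient, process variance, observation variance

# simulate a latent AR(1) series and noisy observations of it
T = 200
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + rng.normal(0, np.sqrt(q))
y = x + rng.normal(0, np.sqrt(r), T)

# Kalman filter recursion (predict, then update with each observation)
x_hat, P = 0.0, 1.0
estimates = []
for obs in y:
    x_pred = phi * x_hat
    P_pred = phi ** 2 * P + q
    K = P_pred / (P_pred + r)        # Kalman gain
    x_hat = x_pred + K * (obs - x_pred)
    P = (1 - K) * P_pred
    estimates.append(x_hat)

print("last filtered states:", np.round(estimates[-3:], 3))
```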

Keywords: machine learning, simulation techniques, quantum probability, tensor product, time series

Procedia PDF Downloads 431
584 Protective Effect of Cinnamomum zeylanicum Bark Extract against Doxorubicin Induced Cardiotoxicity: A Preliminary Study

Authors: J. A. N. Sandamali, R. P. Hewawasam, K. A. P. W. Jayatilaka, L. K. B. Mudduwa

Abstract:

Introduction: Doxorubicin is widely used in the treatment of solid organ tumors and hematological malignancies, but the dose-dependent cardiotoxicity due to free radical formation compromises its clinical utility. Therapeutic strategies which enhance cellular endogenous defense systems have been identified as promising approaches to combat oxidative stress-associated conditions. Cinnamomum zeylanicum (Ceylon cinnamon) contains a number of antioxidant compounds, which can effectively scavenge reactive oxygen species, including superoxide anions and hydroxyl radicals, as well as other free radicals. Therefore, the objective of the study was to elucidate the most effective dose of Cinnamomum bark extract for ameliorating doxorubicin-induced cardiotoxicity. Materials and methods: Wistar rats were divided into seven groups of 10 animals each. Group 1: normal control (distilled water, orally, for 14 days; 10 mL/kg saline, ip, after a 16-hour fast on the 11th day); Group 2: doxorubicin control (distilled water, orally, for 14 days; 18 mg/kg doxorubicin, ip, after a 16-hour fast on the 11th day); Groups 3-7: five doses of freeze-dried aqueous bark extract (0.125, 0.25, 0.5, 1.0, 2.0 g/kg, orally, daily for 14 days; 18 mg/kg doxorubicin, ip, after a 16-hour fast on the 11th day). Animals were sacrificed on the 15th day, blood was collected for the estimation of cardiac troponin I (cTnI), AST and LDH concentrations, and myocardial tissue was collected for histopathological assessment of myocardial damage; irreversible changes were graded by developing a score. Results: cTnI concentrations of groups 1-7 were 0, 161.9, 128.6, 95.9, 38, 19.41 and 12.36 pg/mL, showing significant differences (p<0.05) between group 2 and groups 4-7. In groups 1-7, serum AST concentrations were 26.82, 68.1, 37.18, 36.23, 26.8, 26.62 and 22.43 U/L and LDH concentrations were 1166.13, 2428.84, 1658.35, 1474.34, 1277.58, 1110.21 and 974.40 U/L, and a significant difference (p<0.05) was observed between group 2 and groups 3-7. The maximum score for myocardial necrosis was observed in group 2. Parallel to the increase in the dosage of the plant extract, a gradual reduction of the score for myocardial necrosis was observed in groups 3-7. Reversible histological changes such as vacuolation and congestion were observed in group 2 and all plant-treated groups. Haemorrhages, inflammatory cell infiltration, and interstitial oedema were observed in group 2 but were absent in groups treated with higher doses of the plant extract. Discussion and conclusion: According to the in vitro antioxidant assays performed, Cinnamomum zeylanicum (Ceylon cinnamon) bark possesses high amounts of polyphenolic substances and high antioxidant activity. The present study showed that Cinnamomum zeylanicum extract at 2.0 g/kg possesses the most significant cardioprotective effect against doxorubicin-induced cardiotoxicity. It can be postulated that pretreatment with Cinnamomum bark extract may replenish the cardiomyocytes with antioxidants needed for defense against the oxidative stress induced by doxorubicin.

Keywords: cardioprotection, Cinnamomum zeylanicum, doxorubicin, free radicals

Procedia PDF Downloads 137
583 Statistical Analysis to Compare between Smart City and Traditional Housing

Authors: Taha Anjamrooz, Sareh Rajabi, Ayman Alzaatreh

Abstract:

Smart cities are playing increasingly important roles in real life. Integration and automation between different features of modern cities and information technologies improve smart city efficiency, energy management, human and equipment resource management, quality of life and the utilization of resources for customers. One of the difficulties on this path is the use of, interfacing with, and linking between software, hardware and other IT technologies to develop and optimize processes in various business fields such as construction, supply chain management and transportation, in parallel with cost-effectiveness and resource reduction. Smart cities are also intended to play a vital role in offering a sustainable and efficient model for smart houses while mitigating environmental and ecological matters. Energy management is one of the most important matters for smart houses within smart cities and communities, because of the sensitivity of energy systems, the need to reduce energy wastage and the need to maximize utilization of the required energy. In particular, the consumption of energy in smart houses is a considerable factor in the economic balance and energy management of a smart city, as it can yield significant energy savings and reductions in energy wastage. This research paper develops the features and concept of the smart city in terms of overall efficiency through various effective variables. The selected variables and observations are analyzed through data analysis processes to demonstrate the efficiency of the smart city and compare the effectiveness of each variable. Ten variables are chosen in this study, first to improve the overall efficiency of the smart city by increasing the effectiveness of smart houses using an automated solar photovoltaic system, an RFID system, smart meters and other major elements, interfacing between software, hardware devices and IT technologies, and second to enhance energy management through energy savings within the smart house via efficient variables. The main objective of the smart city and smart houses is to manage energy and increase its efficiency through the selected variables, with a comfortable and harmless atmosphere for the customers within the smart city, combined with control over energy consumption in the smart house using developed IT technologies. Initially, a comparison between traditional housing and smart city samples is conducted to indicate the more efficient system. Moreover, the main variables involved in measuring the overall efficiency of the system are analyzed through various processes to identify and prioritize the variables according to their influence on the model. The resulting analysis of this model can be used for comparison and benchmarking against the traditional lifestyle to demonstrate the advantages of smart cities. Furthermore, given the expense and expected shortage of natural resources in the near future, the limited research conducted so far in the region, and the available potential due to climate and governmental vision, the results and analysis of this study can be used as a key indicator to select the most effective variables or devices during the design and construction phases.

Keywords: smart city, traditional housing, RFID, photovoltaic system, energy efficiency, energy saving

Procedia PDF Downloads 86
582 Drought Risk Analysis Using Neural Networks for Agri-Businesses and Projects in Lejweleputswa District Municipality, South Africa

Authors: Bernard Moeketsi Hlalele

Abstract:

Drought is a complicated natural phenomenon that creates significant economic, social, and environmental problems. Analysis of paleoclimatic data indicates that severe and extended droughts are an inevitable part of the natural climatic cycle. This study characterised drought in Lejweleputswa using the Standardised Precipitation Index (SPI) to quantify drought and neural networks (NN) to predict it. Monthly precipitation data spanning 37 years were obtained from an online NASA database. Prior to the final analysis, this dataset was checked for outliers using SPSS; outliers were removed and replaced using the Expectation Maximization algorithm in SPSS. This was followed by homogeneity and stationarity tests to ensure non-spurious results. A non-parametric Mann-Kendall test was used to detect monotonic trends in the dataset. Two temporal scales, SPI-3 and SPI-12, corresponding to agricultural and hydrological drought, showed statistically significant decreasing trends with p-values of 0.0006 and 4.9 x 10⁻⁷, respectively. The study area has been plagued with severe drought events on SPI-3, while SPI-12 showed an approximately 20-year cycle. The study concluded with a seasonal analysis that showed no significant trend patterns, and NN were therefore used to predict possible SPI-3 values for the last season of 2018/2019 and the four seasons of 2020. The predicted drought intensities ranged from mild to extreme drought events. It is therefore recommended that farmers, agri-business owners, and other relevant stakeholders resort to drought-resistant crops as a means of adaptation.
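
A minimal sketch of an SPI-3 computation (illustrative, not the authors' exact workflow) is given below: precipitation is aggregated over a rolling 3-month window, a gamma distribution is fitted to the aggregated values, and the fitted (mixed) CDF is mapped to standard normal quantiles. The input file and column names are assumptions.

```python
import numpy as np
import pandas as pd
from scipy import stats

precip = pd.read_csv("monthly_precip.csv")["precip_mm"]   # hypothetical monthly series
window = 3
agg = precip.rolling(window).sum().dropna()                # 3-month accumulated precipitation

# fit a two-parameter gamma to the non-zero accumulations (zeros handled separately)
nonzero = agg[agg > 0]
shape, loc, scale = stats.gamma.fit(nonzero, floc=0)
prob_zero = (agg == 0).mean()

# mixed CDF accounting for zero-precipitation periods, then SPI via the inverse normal
cdf = prob_zero + (1 - prob_zero) * stats.gamma.cdf(agg, shape, loc=loc, scale=scale)
spi3 = stats.norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))
print(pd.Series(spi3, index=agg.index).tail())
```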

Keywords: drought, risk, neural networks, agri-businesses, project, Lejweleputswa

Procedia PDF Downloads 94
581 Secure Automatic Key SMS Encryption Scheme Using Hybrid Cryptosystem: An Approach for One Time Password Security Enhancement

Authors: Pratama R. Yunia, Firmansyah, I., Ariani, Ulfa R. Maharani, Fikri M. Al

Abstract:

Nowadays, notwithstanding that the role of SMS as a means of communication has largely been replaced by online applications such as WhatsApp, Telegram, and others, the fact that SMS is still used for certain important communication needs is indisputable. Among these is the sending of one-time passwords (OTP) as an authentication medium for various online applications, ranging from chatting and shopping to online banking. However, the use of SMS does not in itself guarantee the security of transmitted messages. In fact, messages transmitted between BTSs are still in plaintext, making them extremely vulnerable to eavesdropping, especially when the message is confidential, as with an OTP. One solution to this problem is an SMS application that provides security services for each transmitted message. Responding to this problem, this study designs an automatic-key SMS encryption scheme as a means to secure SMS communication. The proposed scheme allows SMS sending that is automatically encrypted with keys that are constantly changing (automatic key update), automatic key exchange, and automatic key generation. In terms of the security method, the proposed scheme applies cryptographic techniques with a hybrid cryptosystem mechanism. To validate the proposed scheme, a client-to-client SMS encryption application was developed on the Java platform with AES-256 as the encryption algorithm, RSA-768 as the public and private key generator, and SHA-256 as the message hashing function. The result of this study is a secure automatic-key SMS encryption scheme using a hybrid cryptosystem that can guarantee the security of every transmitted message, making it a reliable solution for sending confidential messages through SMS, although it still has weaknesses in terms of processing time.
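
The hybrid-cryptosystem idea described above, a symmetric cipher for the message body and an asymmetric cipher to wrap a per-message session key, can be sketched as follows. The paper's application is implemented in Java; this Python sketch using the `cryptography` library is only an illustration, and it assumes AES-GCM mode and a 2048-bit RSA key rather than the paper's unspecified AES mode and RSA-768.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Recipient's RSA key pair (the paper uses RSA-768; 2048 bits shown here,
# since 768-bit RSA is no longer considered safe).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Automatic key generation: a fresh AES-256 session key for every message.
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"OTP: 483920", None)

# Automatic key exchange: wrap the session key with the recipient's public key.
wrapped_key = public_key.encrypt(
    session_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# SHA-256 digest of the ciphertext for integrity checking on the receiver side.
digest = hashes.Hash(hashes.SHA256())
digest.update(ciphertext)
message_hash = digest.finalize()

# Receiver: unwrap the session key and decrypt the message.
recovered_key = private_key.decrypt(
    wrapped_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
plaintext = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)
```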

Keywords: encryption scheme, hybrid cryptosystem, one time password, SMS security

Procedia PDF Downloads 103
580 Real-Time Generative Architecture for Mesh and Texture

Authors: Xi Liu, Fan Yuan

Abstract:

In the evolving landscape of physics-based machine learning (PBML), particularly within fluid dynamics and its applications in electromechanical engineering, robot vision, and robot learning, achieving precision and alignment with researchers' specific needs presents a formidable challenge. In response, this work proposes a methodology that integrates neural transformation with a modified smoothed particle hydrodynamics model for generating transformed 3D fluid simulations. This approach is useful for nanoscale science, where the unique and complex behaviors of viscoelastic media demand accurate neurally-transformed simulations for materials understanding and manipulation. In electromechanical engineering, the method enhances the design and functionality of fluid-operated systems, particularly microfluidic devices, contributing to advancements in nanomaterial design, drug delivery systems, and more. The proposed approach also aligns with the principles of PBML, offering advantages such as multi-fluid stylization and consistent particle attribute transfer. This capability is valuable in various fields where the interaction of multiple fluid components is significant. Moreover, the application of neurally-transformed hydrodynamical models extends to manufacturing processes, such as the production of microelectromechanical systems, enhancing efficiency and cost-effectiveness. The system's ability to perform neural transfer on 3D fluid scenes using a deep learning algorithm alongside physical models further adds a layer of flexibility, allowing researchers to tailor simulations to specific needs across scientific and engineering disciplines.
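
As background to the modified smoothed particle hydrodynamics model mentioned above, the sketch below shows the core SPH operation, a kernel-weighted summation density estimate, in Python/NumPy. The cubic spline kernel and the toy particle setup are generic textbook choices and are not taken from the paper.

```python
import numpy as np

def cubic_spline_kernel(r: np.ndarray, h: float) -> np.ndarray:
    """Standard 3D cubic spline smoothing kernel with support radius h."""
    q = r / h
    sigma = 8.0 / (np.pi * h ** 3)
    w = np.zeros_like(q)
    near = q <= 0.5
    far = (q > 0.5) & (q <= 1.0)
    w[near] = 6.0 * (q[near] ** 3 - q[near] ** 2) + 1.0
    w[far] = 2.0 * (1.0 - q[far]) ** 3
    return sigma * w

def sph_density(positions: np.ndarray, masses: np.ndarray, h: float) -> np.ndarray:
    """Summation density: rho_i = sum_j m_j * W(|x_i - x_j|, h)."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (masses[None, :] * cubic_spline_kernel(r, h)).sum(axis=1)

# Toy example: 100 equal-mass particles in a unit cube.
pos = np.random.default_rng(0).random((100, 3))
rho = sph_density(pos, np.full(100, 1e-3), h=0.2)
```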

Keywords: physics-based machine learning, robot vision, robot learning, hydrodynamics

Procedia PDF Downloads 36
579 Modeling of Sediment Yield and Streamflow of Watershed Basin in the Philippines Using the Soil Water Assessment Tool Model for Watershed Sustainability

Authors: Warda L. Panondi, Norihiro Izumi

Abstract:

Sedimentation is a significant threat to the sustainability of reservoirs and their watersheds. In the Philippines, the Pulangi watershed has experienced high sediment loss, mainly due to land conversions and plantations, with critical erosion rates beyond the tolerable limit of 10 ton/ha/yr in all of its sub-basins. Given this, predicting runoff volume and sediment yield is essential for realistically examining the country's soil conservation techniques. In this research, the Pulangi watershed was modeled using the Soil and Water Assessment Tool (SWAT) to predict the watershed basin's annual runoff and sediment yield. For the calibration and validation of the model, SWAT-CUP was utilized. The model was calibrated with monthly discharge data for 1990-1993 and validated for 1994-1997. The sediment yield was calibrated with 2014 data and validated with 2015 data because of the limited observed datasets. Uncertainty analysis and calculation of efficiency indexes were accomplished through the SUFI-2 algorithm. According to the coefficient of determination (R²), Nash-Sutcliffe efficiency (NSE), Kling-Gupta efficiency (KGE), and PBIAS, the streamflow simulation indicates good performance for both the calibration and validation periods, while the sediment yield shows satisfactory performance for both calibration and validation. This study was therefore able to identify the most critical sub-basin and the severe need for soil conservation. Furthermore, this study will provide baseline information to prevent floods and landslides and serve as a useful reference for land-use policies and watershed management and sustainability in the Pulangi watershed.
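
The goodness-of-fit statistics named above (R², NSE, KGE, PBIAS) are standard model-evaluation indices. A minimal Python sketch of three of them is given below for reference, with observed and simulated streamflow passed as arrays; this is a generic implementation of the published formulas, not code from the SWAT-CUP/SUFI-2 toolchain, and the example values are synthetic.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency (1 is a perfect fit)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta efficiency (Gupta et al., 2009)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()      # variability ratio
    beta = sim.mean() / obs.mean()     # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

def pbias(obs, sim):
    """Percent bias; positive values indicate model underestimation."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

# Example: monthly observed vs simulated discharge (m^3/s), synthetic values.
q_obs = np.array([12.0, 18.5, 22.1, 9.8, 6.4, 5.1])
q_sim = np.array([11.2, 19.0, 20.5, 10.4, 7.0, 4.8])
print(nse(q_obs, q_sim), kge(q_obs, q_sim), pbias(q_obs, q_sim))
```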

Keywords: Pulangi watershed, sediment yield, streamflow, SWAT model

Procedia PDF Downloads 172
578 The ‘Othered’ Body: Deafness and Disability in Nina Raine’s Tribes

Authors: Nurten Çelik

Abstract:

Under the new developments in science, medicine, sociology, psychology and literary theories, body studies have gained huge importance, and the body has become a debated issue. There has emerged, among sociologists and literary theorists, an overwhelming consensus that the body is socially, politically and culturally perceived and constructed and that, consequently, the position of an individual in society is determined in accordance with his/her body image. In this regard, the most complicated point concerns the theoretical views propounded in disability studies, where the disabled body is considered to be a site upon which social and political restrictions as well as repressions are inscribed. There is a widely accepted view that, no matter what kind of disability is concerned, those with physical, mental or learning impairments face varied social, political and environmental obstacles that prevent them from being active citizens, workers, lovers and even family members. In parallel with these approaches, the sufferings of disabled individuals have attained their place in cinema and literature as well as in theatre studies under the category of disability theatre. One of the prominent plays dealing with physical disability came from the contemporary British playwright Nina Raine. In her award-winning play Tribes, which premiered at the Royal Court Theatre in 2010, Raine develops the social strata in which her deaf protagonist, Billy, caught between two tribes, namely his family and his lover Sylvia, a member of the deaf community, experiences personal and social hardships due to his hearing impairment. In the play, the intransigent and self-opinionated family members foster no sense of empathy towards Billy; there is noisy talking and shouting, but no communication, love, compassion or mutual understanding, and language becomes merely a tool for the expression of rage and oppression. In the disordered atmosphere of the family life, Billy experiences isolation and loneliness. Billy's hopes for success and love are destroyed when Sylvia, caught between hearing and deafness, rejects him because she does not fully grasp what Billy is experiencing. Drawing upon the hardships Billy undergoes in his relationships with his family and his girlfriend, Tribes problematizes the concept of deafness and explores to what extent a deaf person can find a place in the hearing world. Setting 'disabled' bodies against 'abled' bodies within a family, a microcosm of the society in which bodies are socially shaped and constructed, Tribes dramatizes how disabled bodies are disenfranchised, stigmatised, marginalized and othered on the grounds that they are socially misfit. Tribes, with a specific focus on the dysfunctional family, shows that the lack of communication and empathy numbs the characters to each other's feelings, and thereby they become more disabled than Billy. In conclusion, this paper, with reference to the embodiment of disability and social theories, aims to explore how disabled bodies are socially marked and segregated from family and society.

Keywords: body, deafness, disability, disability theatre, Nina Raine, tribes

Procedia PDF Downloads 212
577 Investigating the Strategies for Managing On-plot Sanitation Systems’ Faecal Waste in Developing Regions: The Case of Ogun State, Nigeria

Authors: Olasunkanmi Olapeju

Abstract:

A large chunk of the global population is not yet connected to water-borne faecal management systems that rely on flush mechanisms and sewer networks linked to a central treatment plant. Only about 10% of sub-Saharan African countries are connected to central sewage systems. In Nigeria, the majority of the population not only depends on on-plot sanitation systems; a large proportion also lacks access to safe and improved toilets. Apart from organizational challenges and technical capacity, the other major factors that account for why faecal waste management remains unimproved in developing countries are faulty planning frameworks that fail to maintain a balance between urbanization dynamics and infrastructure, and misconceptions about what modern sanitation entails. In most cases, the quest to implement development patterns that integrate modern sewer-based sanitation systems carries huge financial and political costs. Faecal waste management in poor countries largely lacks the needed political attention and budgetary prioritization. Yet the on-plot sanitation systems that are mainly relied upon need to be managed in a manner that is sustainable and healthy, pending development embracing a more sustainable off-site central sewage system. This study investigates existing strategies for managing on-plot sanitation systems' faecal waste in Ogun State, Nigeria, with the aim of recommending sustainable sanitation management systems. The study adopted the convergent parallel variant of the mixed-methods technique, which involves both quantitative and qualitative methods of data collection. Adopting a four-level multi-stage approach inclusive of all political divisions in the study area, a total of 330 questionnaires were administered. The qualitative data were gathered through a purposive approach, scoping down to 33 key informants. SPSS software (Version 22.0) was employed for descriptive analysis. The study shows that about 52% of households adopt the non-recovery management (NRM) practices of burying their latrines with sand or shrinking the sludge with chemicals such as carbides. The dominance of non-recovery management seriously constrains the quest for faecal resource recovery. Essentially, the management techniques adopted by households depend largely on the technology of their sanitary containments, the emptying means available, the ability of households to pay for the cost of emptying, and the social acceptability of reusing faecal waste, which determines faecal resource recoverability. The study suggests that municipal authorities in the study area urgently need to intervene in the sanitation sector and consider it a key element of the planning process. There is a need for a comprehensive plan that would ensure a seamless transition to a modern sanitation management system.
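
The descriptive analysis reported above (shares of households per management strategy, and how strategy varies with containment technology) can be reproduced on any comparable survey dataset with a few lines of pandas; the column names and category labels below are hypothetical and do not correspond to the study's actual questionnaire items.

```python
import pandas as pd

# Hypothetical survey records standing in for the 330 questionnaires.
df = pd.DataFrame({
    "containment": ["pit latrine", "septic tank", "pit latrine",
                    "pit latrine", "septic tank"],
    "management":  ["bury with sand", "vacuum emptying", "chemical shrinkage",
                    "bury with sand", "manual emptying"],
})

# Share of households per faecal-waste management strategy (percent).
print(df["management"].value_counts(normalize=True).mul(100).round(1))

# Cross-tabulation of containment technology against management strategy.
print(pd.crosstab(df["containment"], df["management"], normalize="index"))
```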

Keywords: faecal, management, planning, waste, sanitation, sustainability

Procedia PDF Downloads 77
576 Modelling and Simulation Efforts in Scale-Up and Characterization of Semi-Solid Dosage Forms

Authors: Saurav S. Rath, Birendra K. David

Abstract:

The generic pharmaceutical industry has to operate within strict timelines for product development and scale-up from lab to plant. Hence, detailed product and process understanding and the implementation of appropriate mechanistic modelling and Quality-by-Design (QbD) approaches are imperative across the product life cycle. This work provides example cases of such efforts in topical dosage products. Topical products are typically emulsions, gels, thick suspensions or even simple solutions. The efficacy of such products is determined by characteristics like rheology and morphology. Defining, and scaling up, the right manufacturing process with a given set of ingredients to achieve the right product characteristics presents a challenge to the process engineer. For example, the non-Newtonian rheology varies not only with the critical process parameters (CPPs) and critical material attributes (CMAs) but is also an implicit function of globule size, a critical quality attribute (CQA). Hence, this calls for various mechanistic models to help predict product behaviour. This paper focuses on such models obtained from computational fluid dynamics (CFD) coupled with population balance modelling (PBM) and constitutive models (such as shear and energy density). For the particular case of high shear homogenisers (HSHs) used in the manufacture of thick emulsions/gels, this work presents findings on (i) a scale-up algorithm for HSHs using shear strain, a novel scale-up parameter for estimating mixing parameters, (ii) the non-linear relationship between viscosity and the shear imparted into the system, and (iii) the effect of hold time on product rheology. Specific examples of how this approach enabled scale-up across 1 L, 10 L, 200 L, 500 L and 1000 L scales will be discussed.
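
To illustrate what a shear-strain-based scale-up criterion can look like, the Python sketch below keeps the cumulative shear strain (here estimated as tip speed divided by rotor-stator gap, multiplied by hold time) constant between scales and solves for the plant-scale hold time. The shear-rate estimate and the geometry values are generic assumptions for illustration; they are not the paper's algorithm or equipment data.

```python
import math

def tip_speed(rpm: float, rotor_d: float) -> float:
    """Rotor tip speed [m/s] from rotational speed [rpm] and rotor diameter [m]."""
    return math.pi * rotor_d * rpm / 60.0

def shear_strain(rpm: float, rotor_d: float, gap: float, hold_time: float) -> float:
    """Cumulative shear strain ~ (tip speed / rotor-stator gap) x hold time [s]."""
    return tip_speed(rpm, rotor_d) / gap * hold_time

def hold_time_for_strain(target: float, rpm: float, rotor_d: float, gap: float) -> float:
    """Hold time [s] at the larger scale that reproduces a target shear strain."""
    return target * gap / tip_speed(rpm, rotor_d)

# Illustrative geometries (not from the paper): 10 L lab batch vs 1000 L plant.
strain_lab = shear_strain(rpm=5000, rotor_d=0.05, gap=0.5e-3, hold_time=600)
t_plant = hold_time_for_strain(strain_lab, rpm=1500, rotor_d=0.20, gap=1.0e-3)
print(f"plant-scale hold time ~ {t_plant / 60:.1f} min")
```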

Keywords: computational fluid dynamics, morphology, quality-by-design, rheology

Procedia PDF Downloads 240