Search results for: key elements of success
1139 Analysis of Delays during Initial Phase of Construction Projects and Mitigation Measures
Authors: Sunaitan Al Mutairi
Abstract:
A perfect start is a key factor for project completion on time. The study examined the effects of delayed mobilization of resources during the initial phases of the project. This paper mainly highlights the identification and categorization of all delays during the initial construction phase and their root cause analysis, with corrective/control measures, for the Kuwait Oil Company oil and gas projects. A relatively large percentage of the delays identified during project execution (contract award to end of defects liability period) was attributed to mobilization/preliminary activity delays. Data analysis demonstrated a significant increase in average project delay during the last five years compared to the previous period. Contractors had delays/issues during the initial phase that resulted in slippages which progressively increased, leading to time and cost overruns. Delays/issues not mitigated on time during the initial phase had a very high impact on project completion. Data analysis of the delays for the past five years was carried out using trend charts, scatter plots, process maps, box plots, the relative importance index and Pareto charts. Construction of any project inside the Gathering Centers involves complex management skills related to workforce, materials, plant, machinery, new technologies, etc. Delay affects the completion of projects and compromises the quality, schedule and budget of project deliverables. Works executed as per plan during the initial phase and start-up duration of the project construction activities resulted in minor slippages/delays in project completion. In addition, there was a good working environment between client and contractor, resulting in better project execution and management. Notably, where the contractor was on the front foot in the execution of projects, there were minimal or no delays during the initial and construction periods. Hence, a perfect start during the initial construction phase has a positive influence on project success. This paper studies each type of delay with real examples supported by statistical results and suggests mitigation measures. A detailed analysis was carried out with all stakeholders, based on the impact and occurrence of delays, to arrive at a practical and effective outcome to mitigate the delays. The key to improvement is to have proper control measures and periodic evaluation/audit to ensure implementation of the mitigation measures. The focus of this research is to reduce the delays encountered during the initial construction phase of the project life cycle.
Keywords: construction activities delays, delay analysis for construction projects, mobilization delays, oil & gas projects delays
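Among the tools listed above is the relative importance index (RII), a standard way of ranking delay causes from survey ratings. The following is a minimal Python sketch of that calculation; the 5-point impact scale and the survey responses are illustrative assumptions, not the study's data.

```python
# Illustrative sketch: Relative Importance Index (RII) for ranking delay causes.
# The 5-point scale and the sample ratings below are assumptions for demonstration.

def relative_importance_index(ratings, highest_weight=5):
    """RII = sum(W) / (A * N), where W are respondent ratings (1..A),
    A is the highest possible weight and N the number of respondents."""
    return sum(ratings) / (highest_weight * len(ratings))

# Hypothetical ratings (1 = very low impact, 5 = very high impact)
survey = {
    "late mobilization of resources": [5, 4, 5, 5, 4],
    "delayed permits/approvals":      [4, 3, 4, 5, 3],
    "late material procurement":      [3, 4, 3, 3, 4],
}

ranked = sorted(((cause, relative_importance_index(r)) for cause, r in survey.items()),
                key=lambda kv: kv[1], reverse=True)
for cause, rii in ranked:
    print(f"{cause}: RII = {rii:.2f}")
```

Causes with the highest RII would then feed the Pareto chart described in the abstract.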
Procedia PDF Downloads 318
1138 Using Signature Assignments and Rubrics in Assessing Institutional Learning Outcomes and Student Learning
Authors: Leigh Ann Wilson, Melanie Borrego
Abstract:
The purpose of institutional learning outcomes (ILOs) is to assess what students across the university know and what they do not. The issue is gathering this information in a systematic and usable way. This presentation will explain how one institution has engineered this process for both student success and maximum faculty input into curriculum and course design. At Brandman University, there are three levels of learning outcomes: course, program, and institutional. Institutional Learning Outcomes (ILOs) are mapped to specific courses. Faculty course developers write the signature assignments (SAs) in alignment with the Institutional Learning Outcomes for each course. These SAs use a specific rubric that is applied consistently by every section and every instructor. Each year, the 12-member General Education Team (GET), as a part of their work, conducts the calibration and assessment of the university-wide SAs and the related rubrics for one or two of the five ILOs. GET members, who are senior faculty and administrators representing each of the university's schools, lead the calibration meetings. Specifically, calibration is a process designed to ensure the accuracy and reliability of evaluating signature assignments by working with peer faculty to interpret rubrics and compare scoring. These calibration meetings include the full-time and adjunct faculty members who teach the course, to ensure consensus on the application of the rubric. Each calibration session is chaired by a GET representative as well as the course custodian/contact where the ILO signature assignment resides. The overall calibration process GET follows includes multiple steps, such as: contacting and inviting relevant faculty members to participate; organizing and hosting calibration sessions; and reviewing and discussing at least 10 samples of student work from class sections during the previous academic year for each applicable signature assignment. In terms of commitment, calibration teams attend two virtual meetings lasting up to three hours each. The first meeting focuses on interpreting the rubric, and the second meeting involves comparing scores for sample work and sharing feedback about the rubric and assignment. Participants are expected to follow all directions provided, participate actively, and respond to scheduling requests and other emails within 72 hours. The virtual meetings are recorded for future institutional use. Adjunct faculty are paid a small stipend after participating in both calibration meetings. Full-time faculty can use this work on their annual faculty report for "internal service" credit.
Keywords: assessment, assurance of learning, course design, institutional learning outcomes, rubrics, signature assignments
Procedia PDF Downloads 280
1137 Flotation of Rare Earth Oxides from Iron-Oxide Silicate Rich Tailings Using Fatty Acids
Authors: George B. Abaka-Wood, Massimiliano Zanin, Jonas Addai-Mensah, William Skinner
Abstract:
The versatility of froth flotation has made it vital in the beneficiation of rare earth element minerals from either high- or low-grade ores. There has been a significant increase in the quantity of iron oxide silicate-rich tailings generated from the extraction of primary commodities such as copper and gold in Australia, and these tailings have been identified to contain very low-grade rare earth oxides (≤ 1%). There is a vast knowledge gap in the beneficiation of rare earth oxides from such tailings. The aim of this research is to investigate the feasibility of using fatty acids as collectors for the flotation recovery and upgrade of rare earth oxides from selected iron oxide silicate-rich tailings. Two forms of fatty acid collectors (oleic acid and sodium oleate) were tested in this investigation. Flotation tests were carried out using a 1.2 L Denver D-12 cell. The effects of pulp pH, fatty acid dosage, particle size distribution (-150 +75 µm, -75 +38 µm and -38 µm) and conventional depressant (sodium silicate and starch) dosage on the flotation recovery of rare earth oxides were investigated. A comparison of the flotation results indicated that sodium oleate was the more efficient fatty acid for rare earth oxide flotation at all the pulp pH values investigated. The flotation performance was found to be particle size-dependent. Both sodium silicate and starch were unselective: decreasing the recovery of iron oxides and silicate minerals, respectively, came with a corresponding decrease in rare earth oxide recovery. Generally, iron oxides and silicate minerals formed a substantial fraction of the flotation concentrates obtained, both in the absence and presence of depressants, resulting in a generally low rare earth oxide upgrade, even though rare earth oxide recoveries were high. The flotation tests carried out on the tailings sample suggest the feasibility of rare earth oxide recovery using fatty acids, although particle size distribution and mineral liberation are key limiting factors in achieving a selective rare earth oxide upgrade.
Keywords: depressants, flotation, oleic acid, sodium oleate
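Recovery and upgrade in tests like these are conventionally computed from the standard two-product balance. The sketch below shows that calculation in Python; the feed and concentrate values are hypothetical, not the study's assays.

```python
# Illustrative sketch of the two-product flotation balance used to report
# recovery and upgrade (enrichment) ratio. All numbers below are hypothetical.

def recovery(conc_mass, conc_grade, feed_mass, feed_grade):
    """Recovery (%) = 100 * (C * c) / (F * f)."""
    return 100.0 * conc_mass * conc_grade / (feed_mass * feed_grade)

def upgrade_ratio(conc_grade, feed_grade):
    """Upgrade ratio = concentrate grade / feed grade."""
    return conc_grade / feed_grade

# Hypothetical: 100 t of tailings feed at 0.8% REO yields 20 t of concentrate at 2.4% REO
print(f"REO recovery: {recovery(20, 2.4, 100, 0.8):.1f}%")   # 60.0%
print(f"Upgrade ratio: {upgrade_ratio(2.4, 0.8):.1f}x")      # 3.0x
```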
Procedia PDF Downloads 189
1136 Healthcare-SignNet: Advanced Video Classification for Medical Sign Language Recognition Using CNN and RNN Models
Authors: Chithra A. V., Somoshree Datta, Sandeep Nithyanandan
Abstract:
Sign Language Recognition (SLR) is the process of interpreting and translating sign language into spoken or written language using technological systems. It involves recognizing the hand gestures, facial expressions, and body movements that make up sign language communication. The primary goal of SLR is to facilitate communication between the hearing- and speech-impaired community and those who do not understand sign language. Due to increased awareness and greater recognition of the rights and needs of the hearing- and speech-impaired community, sign language recognition has gained significant importance over the past 10 years. Technological advancements in the fields of Artificial Intelligence and Machine Learning have made it more practical and feasible to create accurate SLR systems. This paper presents a distinct approach to SLR by framing it as a video classification problem using Deep Learning (DL), whereby a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) has been used. This research targets the integration of sign language recognition into healthcare settings, aiming to improve communication between medical professionals and patients with hearing impairments. The spatial features from each video frame are extracted using a CNN, which captures essential elements such as hand shapes, movements, and facial expressions. These features are then fed into an RNN that learns the temporal dependencies and patterns inherent in sign language sequences. The INCLUDE dataset has been enhanced with more videos from the healthcare domain, and the model is evaluated on the same. Our model achieves 91% accuracy, representing state-of-the-art performance in this domain. The results highlight the effectiveness of treating SLR as a video classification task with the CNN-RNN architecture. This approach not only improves recognition accuracy but also offers a scalable solution for real-time SLR applications, significantly advancing the field of accessible communication technologies.
Keywords: sign language recognition, deep learning, convolution neural network, recurrent neural network
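The CNN-per-frame followed by RNN-over-frames idea can be sketched compactly in PyTorch. The layer sizes, number of sign classes, and input resolution below are assumptions for illustration; the paper's exact architecture and its INCLUDE-based dataset are not reproduced here.

```python
# Minimal sketch of a CNN -> RNN video classifier for SLR. All hyperparameters
# (channels, hidden size, 50 classes, 64x64 frames) are illustrative assumptions.
import torch
import torch.nn as nn

class CnnRnnSignClassifier(nn.Module):
    def __init__(self, num_classes=50, feat_dim=128, hidden=256):
        super().__init__()
        # Per-frame spatial feature extractor (hand shape, pose, expression cues)
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # Temporal model over the frame sequence
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, video):             # video: (batch, time, 3, H, W)
        b, t = video.shape[:2]
        feats = self.cnn(video.flatten(0, 1)).view(b, t, -1)
        _, (h_n, _) = self.rnn(feats)     # last hidden state summarizes the clip
        return self.head(h_n[-1])

logits = CnnRnnSignClassifier()(torch.randn(2, 16, 3, 64, 64))  # 2 clips, 16 frames
print(logits.shape)  # torch.Size([2, 50])
```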
Procedia PDF Downloads 28
1135 Nanotechnology in Construction as a Building Security
Authors: Hanan Fayez Hussein
Abstract:
Due to increasing environmental challenges and security problems in the world, such as global warming, storms, and terrorism, humans have discovered new technologies and new materials in order to program daily life. As providing physical and psychological security is one of the primary functions of architecture, a building must prevent unauthorized entry and harm to occupants and reduce the threat of attack by becoming a less attractive target through new technologies such as nanotechnology, which has emerged as a major science and technology focus of the 21st century and will be the next industrial revolution. Nanotechnology is the control of the properties of matter; it deals with structures of size 100 nanometers or smaller in at least one dimension and has wide application in various fields. The construction and architecture sectors were among the first to be identified as a promising application area for nanotechnology. The advantages of using nanomaterials in construction are enormous, and they promise heightened building security by utilizing the strength of building materials to make our buildings more secure and to enable smart homes. Access barriers such as walls and windows could incorporate stronger materials benefiting from nano-reinforcement, utilizing nanotubes and nanocomposites to act as a protective cover. Carbon nanotubes, as one nanotechnology application, can be designed to be up to 250 times stronger than steel. Nano-enabled devices and materials offer both enhanced and, in some cases, completely new defence systems. In addition, adding small amounts of carbon nanoparticles to construction materials such as cement, concrete, wood, glass, gypsum, and steel can make these materials act as defence elements. This paper highlights the fact that nanotechnology can impact future global security and that a building's envelope can act as a defensive cover for the building, resistant to threats that may attack it. It then focuses on the effect of nanotechnology on construction materials: concrete, for example, can obtain excellent mechanical, chemical, and physical properties with less material through nano-additives, acting as a precautionary shield for the building.
Keywords: nanomaterial, global warming, building security, smart homes
Procedia PDF Downloads 82
1134 Energy Trading for Cooperative Microgrids with Renewable Energy Resources
Authors: Ziaullah, Shah Wahab Ali
Abstract:
Micro-grids equipped with heterogeneous energy resources present the idea of small-scale distributed energy management (DEM). DEM helps in minimizing transmission and operation costs, power management and peak load demands. Micro-grids are collections of small, independently controllable power-generating units and renewable energy resources. Micro-grids also enable active customer participation by giving customers access to real-time information and control. The capability of fast restoration after faults, the integration of renewable energy resources, and Information and Communication Technologies (ICT) make the micro-grid an ideal system for distributed power systems. Micro-grids can have a bank of energy storage devices. The energy management system of a micro-grid can perform real-time energy forecasting of renewable resources, energy storage elements and controllable loads to produce proper short-term schedules that minimize total operating costs. We present a review of existing micro-grid optimization objectives/goals, constraints, solution approaches and tools used for energy management. Cost-benefit analysis of micro-grids reveals that cooperation among different micro-grids can play a vital role in reducing imported energy cost and improving system stability. Cooperative micro-grid energy trading is an approach to distributed energy resources that gives local energy demand more control over the optimization of power resources and their use. Cooperation among different micro-grids brings interconnectivity and power trading issues. The literature shows that cooperative micro-grid energy trading remains an open area of research. In this paper, we propose and formulate an efficient energy management/trading module for interconnected micro-grids (a toy illustration of such a formulation is sketched below). It is believed that this research will open new directions in the future for energy trading in cooperative/interconnected micro-grids.
Keywords: distributed energy management, information and communication technologies, microgrid, energy management
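As a concrete illustration of the kind of cost-minimizing trading formulation such a module involves, here is a hedged linear-programming sketch in Python/SciPy. The two-microgrid topology, per-kWh costs, demands, and capacity limits are fabricated for demonstration and are not the paper's formulation.

```python
# Toy LP: minimize total generation + trading + import cost for two micro-grids.
# All numbers are illustrative assumptions.
from scipy.optimize import linprog

# Decision variables: [g1, g2, t12, imp1, imp2]
#   g1, g2 = local generation in MG1/MG2 (kWh)
#   t12    = energy traded from MG1 to MG2 (kWh, one direction for simplicity)
#   imp1/2 = energy imported from the main grid (kWh)
c = [0.05, 0.07, 0.01, 0.20, 0.20]     # per-kWh costs; grid imports are most expensive

# Demand balance per micro-grid: generation - export + import = demand
A_eq = [
    [1, 0, -1, 1, 0],                  # MG1: g1 - t12 + imp1 = demand1
    [0, 1,  1, 0, 1],                  # MG2: g2 + t12 + imp2 = demand2
]
b_eq = [80, 120]                       # hypothetical demands (kWh)

bounds = [(0, 100), (0, 60), (0, 50), (0, None), (0, None)]  # capacity limits

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x, f"total cost = {res.fun:.2f}")
```

The solution shows MG1's cheap surplus being traded to MG2 before costly grid imports are used, which is the cooperation benefit the abstract argues for.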
Procedia PDF Downloads 375
1133 Technology for Good: Deploying Artificial Intelligence to Analyze Participant Response to Anti-Trafficking Education
Authors: Ray Bryant
Abstract:
3Strands Global Foundation (3SGF), a non-profit with a mission to mobilize communities to combat human trafficking through prevention education and reintegration programs, launched a groundbreaking study that demonstrates the usage and benefits of artificial intelligence in the war against human trafficking. Having gathered more than 30,000 stories from counselors and school staff who have gone through its PROTECT Prevention Education program, 3SGF sought to develop a methodology to measure the effectiveness of the training, which helps educators and school staff identify physical signs and behaviors indicating a student is being victimized. The program further illustrates how to recognize and respond to trauma and teaches the steps to take to report human trafficking, as well as how to connect victims with the proper professionals. 3SGF partnered with Levity, a leader in no-code Artificial Intelligence (AI) automation, to create the research study utilizing natural language processing, a branch of artificial intelligence, to measure the effectiveness of their prevention education program. By applying the logic created for the study, the platform analyzed and categorized each story. If the story, directly from the educator, demonstrated one or more of the desired outcomes (Increased Awareness, Increased Knowledge, or Intended Behavior Change), a label was applied. The system then added a confidence level for each identified label. The study results were generated with a 99% confidence level. Preliminary results show that, of the 30,000 stories gathered, it became overwhelmingly clear that a significant majority of the participants now have increased awareness of the issue, demonstrated better knowledge of how to help prevent the crime, and expressed an intention to change how they approach what they do daily. In addition, approximately 30% of the stories involved comments by educators expressing that they wish they had had this knowledge sooner, as they can think of many students they would have been able to help. Objectives of research: to solve the problem of needing to analyze and accurately categorize more than 30,000 data points of participant feedback in order to evaluate the success of a human trafficking prevention program by using AI and natural language processing. Methodologies used: in conjunction with our strategic partner, Levity, we have created our own NLP analysis engine specific to our problem. Contributions to research: the intersection of AI and human rights, and how to utilize technology to combat human trafficking.
Keywords: AI, technology, human trafficking, prevention
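The labeling-with-confidence step described above can be illustrated with a small text classifier. The sketch below uses scikit-learn as a stand-in (Levity's no-code platform is proprietary and is not reproduced here); the training stories and labels are fabricated.

```python
# Minimal sketch of outcome labeling with a confidence score, using a
# TF-IDF + logistic regression stand-in. The tiny training set is fabricated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

stories = [
    "I never realized trafficking could happen in our school",    # awareness
    "Now I know the warning signs to look for in students",       # knowledge
    "I will change how I check in with my students every day",    # behavior change
    "I learned which professionals to contact to report a case",  # knowledge
]
labels = ["Increased Awareness", "Increased Knowledge",
          "Intended Behavior Change", "Increased Knowledge"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(stories, labels)

new_story = ["I plan to talk with every student who seems withdrawn"]
probs = model.predict_proba(new_story)[0]
best = probs.argmax()
print(model.classes_[best], f"confidence = {probs[best]:.2f}")
```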
Procedia PDF Downloads 59
1132 Purchasing Decision-Making in Supply Chain Management: A Bibliometric Analysis
Authors: Ahlem Dhahri, Waleed Omri, Audrey Becuwe, Abdelwahed Omri
Abstract:
In industrial processes, decision-making ranges across different scales, from process control to supply chain management. The purchasing decision-making process in the supply chain is presently gaining more attention as a critical contributor to a company's strategic success. Given the scarcity of thorough summaries in prior studies, this bibliometric analysis adopts a meticulous approach to achieve quantitative knowledge on the constantly evolving subject of purchasing decision-making in supply chain management. Through bibliometric analysis, we examine a sample of 358 peer-reviewed articles from the Scopus database. VOSviewer and Gephi software were employed to analyze, combine, and visualize the data. Data analytic techniques, including citation networks, PageRank analysis, co-citation, and publication trends, have been used to identify influential works and outline the discipline's intellectual structure. The outcomes of this descriptive analysis highlight the most prominent articles, authors, journals, and countries based on their citations and publications. The findings illustrate an increase in the number of publications, exhibiting a slightly growing trend in this field. Co-citation analysis coupled with content analysis of the most cited articles identified five research themes, as follows: integrating sustainability into the supplier selection process; supplier selection under disruption risks, with assessment and mitigation strategies; fuzzy MCDM approaches for supplier evaluation and selection; purchasing decisions in vendor problems; and decision-making techniques in supplier selection and order lot sizing problems. With the help of a graphic timeline, this exhaustive map of the field provides a visual representation of the evolution of publications, demonstrating a gradual shift in research interest from vendor selection problems to integrating sustainability into the supplier selection process. These clusters offer insights into a wide variety of purchasing methods and conceptual frameworks that have emerged; however, they have not been validated empirically. The findings suggest that future research should engage in deeper practical and empirical analysis to enrich the theories. These outcomes provide a powerful road map for further study in this area.
Keywords: bibliometric analysis, citation analysis, co-citation, Gephi, network analysis, purchasing, SCM, VOSviewer
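The co-citation step of such an analysis links two papers whenever a third paper cites both, with edge weights counting such co-citations; PageRank over the resulting graph then highlights influential works. Below is a hedged Python sketch of this step with fabricated reference lists; the study itself used 358 Scopus records with VOSviewer and Gephi.

```python
# Sketch of co-citation network construction and PageRank scoring.
# The citing-paper -> references mapping is fabricated for illustration.
import itertools
import networkx as nx

references = {
    "P1": ["A", "B", "C"],
    "P2": ["A", "B"],
    "P3": ["B", "C", "D"],
}

G = nx.Graph()
for refs in references.values():
    for u, v in itertools.combinations(sorted(refs), 2):
        w = G.get_edge_data(u, v, {"weight": 0})["weight"]
        G.add_edge(u, v, weight=w + 1)   # one more paper co-cites u and v

# PageRank over the weighted co-citation graph highlights influential works
for node, score in sorted(nx.pagerank(G, weight="weight").items(),
                          key=lambda kv: -kv[1]):
    print(node, round(score, 3))
```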
Procedia PDF Downloads 85
1131 Parallel Fuzzy Rough Support Vector Machine for Data Classification in Cloud Environment
Authors: Arindam Chaudhuri
Abstract:
Classification of data has been actively used as one of the most effective and efficient means of conveying knowledge and information to users. The prime focus has always been on techniques for extracting useful knowledge from data such that returns are maximized. With the emergence of huge datasets, existing classification techniques often fail to produce desirable results. The challenge lies in analyzing and understanding the characteristics of massive data sets by retrieving useful geometric and statistical patterns. We propose a supervised parallel fuzzy rough support vector machine (PFRSVM) for data classification in a cloud environment. The classification is performed by PFRSVM using a hyperbolic tangent kernel. The fuzzy rough set model takes care of the sensitiveness of noisy samples and handles impreciseness in training samples, bringing robustness to the results. The membership function is a function of the center and radius of each class in feature space and is represented with the kernel. It plays an important role in sampling the decision surface. The success of PFRSVM is governed by choosing appropriate parameter values. The training samples are either linearly or nonlinearly separable. The different input points make unique contributions to the decision surface. The algorithm is parallelized with a view to reducing training times. The system is built on a support vector machine library using the Hadoop implementation of MapReduce. The algorithm is tested on large data sets to check its feasibility and convergence. The performance of the classifier is also assessed in terms of the number of support vectors. The challenges encountered in implementing big data classification in machine learning frameworks are also discussed. The experiments were done on the cloud environment available at the University of Technology and Management, India. The results are illustrated for Gaussian RBF and Bayesian kernels. The effect of variability in prediction and generalization of PFRSVM is examined with respect to values of the parameter C. It effectively resolves outlier effects and imbalance and overlapping class problems, generalizes to unseen data, and relaxes the dependency between features and labels. The average classification accuracy for PFRSVM is better than that of other classifiers for both Gaussian RBF and Bayesian kernels. The experimental results on both synthetic and real data sets clearly demonstrate the superiority of the proposed technique.
Keywords: FRSVM, Hadoop, MapReduce, PFRSVM
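Two ingredients named above can be sketched compactly: the hyperbolic tangent (sigmoid) kernel and a fuzzy membership based on each sample's distance from its class center relative to the class radius. Parameter values below are illustrative assumptions; PFRSVM's full fuzzy rough model and its Hadoop/MapReduce parallelization are beyond this snippet.

```python
# Sketch of a tanh kernel and a center/radius fuzzy membership (Lin-Wang style).
# Parameters a, b, delta are illustrative, not the paper's fitted values.
import numpy as np

def tanh_kernel(x, y, a=0.5, b=-1.0):
    """K(x, y) = tanh(a * <x, y> + b)."""
    return np.tanh(a * np.dot(x, y) + b)

def fuzzy_membership(X_class, delta=1e-3):
    """Membership of each training sample in its own class:
    mu_i = 1 - ||x_i - center|| / (radius + delta),
    so noisy or outlying points contribute less to the decision surface."""
    center = X_class.mean(axis=0)
    d = np.linalg.norm(X_class - center, axis=1)
    radius = d.max()
    return 1.0 - d / (radius + delta)

X = np.random.default_rng(0).normal(size=(10, 4))   # one class's samples
print(fuzzy_membership(X))                          # outliers get low mu
print(tanh_kernel(X[0], X[1]))
```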
Procedia PDF Downloads 490
1130 Environmental Resilience in Sustainability Outcomes of Spatial-Economic Model Structure on the Topology of Construction Ecology
Authors: Moustafa Osman Mohammed
Abstract:
The resilience and sustainability of construction ecology are essential to the world's socio-economic development. Environmental resilience is crucial in relating construction ecology to the topology of a spatial-economic model. Sustainability of the spatial-economic model gives attention to green business to comply with the Earth's system of naturally exchanged patterns of ecosystems. Systems ecology has consistent and periodic cycles to preserve energy and material flows in the Earth's system. When the model structure influences the communication of internal and external features in system networks, it postulates the valence of first-level spatial outcomes (i.e., project compatibility success). These instrumentalities are dependent on second-level outcomes (i.e., participant security satisfaction). These model outcomes are based on measuring database efficiency from 2015 to 2025. The model topology reflects the state of the art in value-orientation impact and corresponds to the complexity of sustainability issues (e.g., build a consistent database necessary to approach the spatial structure; construct the spatial-economic model; develop a set of sustainability indicators associated with the model; allow quantification of social, economic and environmental impact; use value-orientation as a set of important sustainability policy measures), and demonstrates environmental resilience. The model manages and develops schemes from the perspective of multiple pollutant sources through input-output criteria. These criteria evaluate the effects of external insertions by conducting Monte Carlo simulations and analysis using matrices in a unique spatial structure. The balance of "equilibrium patterns", such as collective biosphere features, has a composite index of the distributed feedback flows. These feedback flows have a dynamic structure with physical and chemical properties for gradual prolonging of incremental patterns. While these structures argue from systems ecology, static loads are not decisive from an artistic/architectural perspective. The popularity of system resilience in systems structures related to ecology has not been achieved without the generation of confusion and vagueness. However, this topic is relevant for forecasting future scenarios where industrial regions will need to keep dealing with the impact of relative environmental deviations. The model attempts to unify the analytic and analogical structure of urban environments using database software to integrate sustainability outcomes, where the process is based on the systems topology of construction ecology.
Keywords: system ecology, construction ecology, industrial ecology, spatial-economic model, systems topology
Procedia PDF Downloads 20
1129 Time of Week Intensity Estimation from Interval Censored Data with Application to Police Patrol Planning
Authors: Jiahao Tian, Michael D. Porter
Abstract:
Law enforcement agencies are tasked with crime prevention and crime reduction under limited resources. Having an accurate temporal estimate of the crime rate would be valuable in achieving such a goal. However, estimation is usually complicated by the interval-censored nature of crime data. We cast the problem of intensity estimation as a Poisson regression, using an EM algorithm to estimate the parameters. Two special penalties are added that provide smoothness over the time of day and day of the week. The approach presented here provides accurate intensity estimates and can also uncover day-of-week clusters that share the same intensity patterns. Anticipating where and when crimes might occur is a key element of successful policing strategies. However, this task is complicated by the presence of interval-censored data. Censored data refers to data in which the event time is only known to lie within an interval instead of being observed exactly. This type of data is prevalent in the field of criminology because of the absence of victims for certain types of crime. Despite its importance, research on the temporal analysis of crime has lagged behind the spatial component. Inspired by the success of solving crime-related problems with a statistical approach, we propose a statistical model for the temporal intensity estimation of crime with censored data. The model is built on Poisson regression and has special penalty terms added to the likelihood. An EM algorithm was derived to obtain maximum likelihood estimates, and the resulting model shows superior performance to the competing model. Our research is in line with the Smart Policing Initiative (SPI) proposed by the Bureau of Justice Assistance (BJA) as an effort to support law enforcement agencies in building evidence-based, data-driven law enforcement tactics. The goal is to identify strategic approaches that are effective in crime prevention and reduction. In our case, we allow agencies to deploy their resources for a relatively short period of time to achieve the maximum level of crime reduction. By analyzing a particular area within cities where data are available, our proposed approach could provide not only an accurate estimate of intensities for the time unit considered but also a time-varying crime incidence pattern. Both will be helpful in the allocation of limited resources, either by improving the existing patrol plan using the discovered day-of-week clusters or by supporting the case for extra resources.
Keywords: cluster detection, EM algorithm, interval censoring, intensity estimation
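The core EM idea for interval-censored counts can be sketched in a few lines: the E-step allocates each event across its censoring interval in proportion to the current intensity estimate, and the M-step re-estimates a piecewise-constant rate from the expected counts. The smoothness penalties over time-of-day and day-of-week used in the paper are omitted here for brevity, and the intervals are hypothetical hour indices over one day.

```python
# Minimal EM sketch for interval-censored Poisson intensity estimation.
# Penalties from the paper are omitted; events are fabricated (start, end) hours.
import numpy as np

n_bins = 24                                      # hourly bins over one day
events = [(8, 12), (9, 10), (20, 24), (0, 24)]   # censoring intervals per event
lam = np.full(n_bins, 1.0 / n_bins)              # initial intensity guess

for _ in range(100):
    # E-step: expected event count per bin given the current intensity
    counts = np.zeros(n_bins)
    for s, e in events:
        w = lam[s:e] / lam[s:e].sum()            # spread one event over its interval
        counts[s:e] += w
    # M-step: MLE of a piecewise-constant Poisson rate (one day of exposure)
    lam_new = np.maximum(counts / 1.0, 1e-12)    # keep rates strictly positive
    if np.allclose(lam_new, lam, atol=1e-8):
        break
    lam = lam_new

print(np.round(lam, 3))
```

In the full model, the smoothness penalties keep this estimate from collapsing onto spikes and allow borrowing strength across adjacent hours and similar days.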
Procedia PDF Downloads 66
1128 Expression of Micro-RNA268 in Zinc Deficient Rice
Authors: Sobia Shafqat, Saeed Ahmad Qaisrani
Abstract:
MicroRNAs play an essential role in the regulation and development of most processes in eukaryotes because of their role as mediators controlling cell growth and differentiation, including the response of RNAs in plants under biotic and abiotic factors or stressors. In some cases, Zn is toxic to plants due to its heavy metal status. Other metals, such as Cd, Hg, and Pb, are extremely toxic, yet Zn is required in rice for the programming of genes under abiotic stress; under such Zn stress, microRNA268 is notably induced in rice. MicroRNA268-overexpressing transgenic plants accumulate large amounts of malondialdehyde, hydrogen peroxide, and an excessive quantity of Zn at the seedling stage. With respect to rice resilience under Zn stress, microRNAs act as negative regulators, while the role of microRNA268 as a modulator varies under different ecological conditions. Understanding the role of microRNA268 under stress conditions, and its practically demonstrated outcomes, is expected to increase plant tolerance under Zn stress, since microRNA intervention is a technique for regulating gene expression. The proposed study examined the genetic factors of Zn stress and its toxicity effects on rice plants, and was conducted in District Vehari, Pakistan. The trial was performed with three replications in a randomized complete block design (RCBD). The blocks were treated with different concentrations of the genetic factors. Through overexpression of microRNA268, rice seedling growth was not arrested under Zn deficiency, owing to the accumulation of large amounts of malondialdehyde, hydrogen peroxide, and an excessive quantity of Zn in the seedlings. The results showed that microRNA268 acts as a negative regulator under Zn stress. In conclusion, under stress conditions, microRNA268 performs a necessary function in the tolerance of rice plants. The work outlines agronomic applications and yield outcomes in rice with a specific amount of Zn application.
Keywords: micro RNA268, zinc, rice, agronomic approach
Procedia PDF Downloads 61
1127 Assessment of Trace Metals Contamination in Surficial and Core Sediments from Ghannouch-Gabes Coastline, Impact of Phosphogypsum Discharge, Southeastern of Tunisia, Mediterranean Sea: Geochemical and Mineralogical Approaches
Authors: Rim Ben Amor, Myriam Abidi, Moncef Gueddari
Abstract:
The purpose of the present study is to assess the level and distribution of CaO, SO3, Cd, Cu, Pb and Zn in core sediments of the Ghannouch-Gabes coast, Gulf of Gabes, on the Tunisian Mediterranean coast. The XRD analyses indicate that the sediments of the Ghannouch-Gabes coast are mainly composed of quartz, calcite, gypsum and fluorine, reflecting the impact of phosphate fertilizer industrial waste. The distribution of surface sediments shows, for all the elements analyzed, that the area located between the commercial port and the fishing port of Gabes is the most polluted zone, where the two harbors acted as barriers and limited the dispersion of the phosphogypsum discharge. The abundance order of the metals was found to be Zn > Cd > Cu > Pb, and the highest levels of heavy metals were found in the uppermost segment of the sediment core compared to the lower subsurface depths, due to a continuous input of phosphogypsum (PG) release; this showed that the area between the two harbors suffers from several types of pollutants compared to the reference core C1, collected from a non-industrialized area. The level of pollution was evaluated using the contamination factor (Cf), the pollution load index (PLI) and the geoaccumulation index (Igeo). The obtained Igeo results allowed us to establish that the area between the commercial harbor of Ghannouch and the fishing harbor of Gabes is the most polluted, where sediments are strongly contaminated with Pb, Cu and Cd. The pollution load index (PLI) classified all sediments collected as "polluted". According to the contamination factor (Cf), the sediments can be considered 'considerably' to 'very highly' contaminated with Pb, 'very highly to moderately' with Cd, 'moderately' with Zn, and between 'moderately' and 'considerably' with Cu. Statistical analyses show that the heavy metals, fluoride, calcium and sulphate result from the same anthropogenic origin. The metallic pollution status of the sediments of the Ghannouch-Gabes coast is worrying and requires serious intervention.
Keywords: trace metals, phosphogypsum, core sediments, accumulation factor, contamination factor
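The three indices named above have standard definitions: contamination factor CF = C_metal / C_background, pollution load index PLI = (CF1 x ... x CFn)^(1/n), and geoaccumulation index Igeo = log2(C_metal / (1.5 x C_background)). The sketch below computes them in Python; the concentrations and baselines are hypothetical, not the measured Ghannouch-Gabes values.

```python
# Standard sediment pollution indices (CF, PLI, Igeo); all values are hypothetical.
import math

background = {"Pb": 20.0, "Cd": 0.3, "Cu": 45.0, "Zn": 95.0}   # assumed baselines (ppm)
measured   = {"Pb": 180.0, "Cd": 2.1, "Cu": 120.0, "Zn": 300.0}

cf = {m: measured[m] / background[m] for m in measured}
pli = math.prod(cf.values()) ** (1.0 / len(cf))
igeo = {m: math.log2(measured[m] / (1.5 * background[m])) for m in measured}

for m in measured:
    print(f"{m}: CF = {cf[m]:.1f}, Igeo = {igeo[m]:.2f}")
print(f"PLI = {pli:.2f}  ({'polluted' if pli > 1 else 'unpolluted'})")
```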
Procedia PDF Downloads 141
1126 A New Co(II) Metal Complex Template with 4-dimethylaminopyridine Organic Cation: Structural, Hirshfeld Surface, Phase Transition, Electrical Study and Dielectric Behavior
Authors: Mohamed dammak
Abstract:
Great attention has been paid to the design and synthesis of novel organic-inorganic compounds in recent decades because of their structural variety and the large diversity of atomic arrangements. In this work, the structure of the novel dimethylaminopyridine tetrachlorocobaltate (C₇H₁₁N₂)₂CoCl₄, prepared by the slow evaporation method at room temperature, has been successfully determined and discussed. The X-ray diffraction results indicate that the hybrid material has a triclinic structure with a P space group and features a 0D structure containing isolated distorted [CoCl₄]²⁻ tetrahedra interposed between [C₇H₁₁N₂]⁺ cations, forming planes perpendicular to the c axis at z = 0 and z = ½. Beyond the effects of the synthesis conditions and the reactants used, the interactions between the cationic planes and the isolated [CoCl₄]²⁻ tetrahedra involve N-H...Cl and C-H...Cl hydrogen-bonding contacts. Inspection of the Hirshfeld surface analysis helps to discuss the strength of the hydrogen bonds and to quantify the intermolecular contacts. A phase transition was discovered at 390 K by thermal analysis, and comprehensive dielectric research is reported, showing good agreement with the thermal data. Impedance spectroscopy measurements were used to study the electrical and dielectric characteristics over a wide range of frequencies and temperatures, 40 Hz-10 MHz and 313-483 K, respectively. The Nyquist plot (Z'' versus Z') from the complex impedance spectrum revealed semicircular arcs described by a Cole-Cole model. An equivalent electrical circuit consisting of linked grain and grain-boundary elements is employed. The real and imaginary parts of the dielectric permittivity, as well as tg(δ), of (C₇H₁₁N₂)₂CoCl₄ at different frequencies reveal a distribution of relaxation times. The presence of grains and grain boundaries is confirmed by the modulus investigations. Electric and dielectric analyses highlight the good protonic conduction of this material.
Keywords: organic-inorganic, phase transitions, complex impedance, protonic conduction, dielectric analysis
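The equivalent-circuit reading of such a Nyquist plot can be illustrated with two Cole-Cole elements in series, Z(ω) = R / (1 + (jωτ)^α), one for the grain and one for the grain-boundary response. All parameter values in the Python sketch below are generic assumptions, not fitted values for (C₇H₁₁N₂)₂CoCl₄.

```python
# Two Cole-Cole elements (grain + grain boundary) in series; parameters are
# illustrative assumptions, producing two depressed semicircles on a Nyquist plot.
import numpy as np

def cole_cole(omega, R, tau, alpha):
    return R / (1.0 + (1j * omega * tau) ** alpha)

freq = np.logspace(np.log10(40), 7, 400)            # ~40 Hz to 10 MHz
omega = 2 * np.pi * freq
Z = cole_cole(omega, R=5e4, tau=1e-4, alpha=0.90)   # grain
Z += cole_cole(omega, R=2e5, tau=1e-2, alpha=0.85)  # grain boundary

# Nyquist data: Z' on the x-axis, -Z'' on the y-axis
print(Z.real[:3], -Z.imag[:3])
```

An exponent α < 1 depresses each semicircle below the real axis, which is the signature of the distribution of relaxation times the abstract mentions.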
Procedia PDF Downloads 85
1125 The Impact of Urbanisation on Sediment Concentration of Ginzo River in Katsina City, Katsina State, Nigeria
Authors: Ahmed A. Lugard, Mohammed A. Aliyu
Abstract:
This paper studied the influence of urban development and its accompanying land surface transformation on the sediment concentration of the Ginzo River, which flows naturally across the city of Katsina. A twin river known as the Tille River, which is less urbanized, was used to compare the sediment concentration results of the Ginzo River in order to ascertain the consequences of the urban area for sediment concentration. An instrument called the USP 61 point-integrating cableway sampler, described by Gregory and Walling (1973), was used to collect the suspended sediment samples in the wet season months of June, July, August and September. The results obtained in the study show that only the sample collected at the peripheral site of the city, which is mostly farmland, resembles the results at the four sites of the Tille River, differing by only about 10%; at the other three sites of the Ginzo, which are highly urbanized, the disparity ranges from 35-45% less than what was obtained at the four sites of the Tille River. In the generalized assessment, the t-test applied to the two sets of data shows that there is a significant difference between the sediment concentration of the urbanized Ginzo River and that of the less urbanized Tille River. The study further found that the lower sediment concentration in the urbanized Ginzo River is attributed to the concretization of surfaces, tarred roads, concretized channeling of segments of the river (including the river bed), and reserved open grassland areas, all within the catchment. The study therefore concludes that urbanization affects not only the hydrology of an urbanized river basin but also the sediment concentration, which is a significant aspect of its geomorphology. This would certainly affect the flood plain of the basin at some point, which might be suitable land for cultivation. It is recommended that further studies on the impact of urbanization on river basins focus on all elements of geomorphology as they have on hydrology. This would make the work more complete, as the two disciplines are inseparable from each other. The authorities concerned should also put in place proper environmental and land use management policies to arrest the menace of land degradation and related episodic events.
Keywords: environment, infiltration, river, urbanization
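The two-sample comparison described above is a standard t-test. A minimal Python sketch follows; the sediment concentrations are hypothetical stand-ins for the wet-season samples from the two rivers, not the study's measurements.

```python
# Welch's two-sample t-test comparing sediment concentrations (hypothetical mg/L).
from scipy import stats

ginzo = [112, 98, 105, 121, 90, 101, 95, 110]      # urbanized river (lower load)
tille = [168, 175, 160, 182, 171, 158, 177, 165]   # less urbanized reference

t_stat, p_value = stats.ttest_ind(ginzo, tille, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Significant difference in sediment concentration between the rivers")
```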
Procedia PDF Downloads 318
1124 Series Connected GaN Resonant Tunneling Diodes for Multiple-Valued Logic
Authors: Fang Liu, JunShuai Xue, JiaJia Yao, XueYan Yang, ZuMao Li, GuanLin Wu, HePeng Zhang, ZhiPeng Sun
Abstract:
III-Nitride resonant tunneling diodes (RTDs) are among the most promising candidates for multiple-valued logic (MVL) elements. Here, we report the monolithic integration of GaN resonant tunneling diodes to realize multiple negative differential resistance (NDR) regions for MVL applications. GaN RTDs, composed of a 2 nm quantum well embedded between two 1 nm quantum barriers, are grown by plasma-assisted molecular beam epitaxy on free-standing c-plane GaN substrates. A negative differential resistance characteristic with a peak current density of 178 kA/cm², in conjunction with a peak-to-valley current ratio (PVCR) of 2.07, is observed. Statistical properties exhibit high consistency, showing a peak current density standard deviation of almost 1%, laying the foundation for the monolithic integration. After complete electrical isolation, two diodes designed with the same area are connected in series. By solving the Poisson and Schrodinger equations in one dimension, the energy band structure is calculated to explain the transport mechanism behind the negative differential resistance phenomenon. Sequential resonant tunneling events in the series-connected RTD pair (SCRTD) form multiple NDR regions with nearly equal peak currents, yielding three stable operating states corresponding to ternary logic. A frequency multiplier circuit achieved using this integration is demonstrated, attesting to the robustness of this multiple-peak feature. This article presents a monolithic integration of SCRTDs with multiple NDR regions driven by the resonant tunneling mechanism, which can be applied to the multiple-valued logic field, promising fast operation speed and a great reduction in circuit complexity, and demonstrating a new solution for nitride devices to break through the limitations of binary logic.
Keywords: GaN resonant tunneling diode, multiple-valued logic system, frequency multiplier, negative differential resistance, peak-to-valley current ratio
Procedia PDF Downloads 81
1123 Investigation of the Opinions and Recommendations of Participants Related to Operating Room Nursing Certified Course Program
Authors: Zehra Gencel Efe, Fatma Susam Ozsayın, Satı Tas
Abstract:
Background and Aim: It is not possible to teach all the knowledge related to operating room nursing within the nursing education process. Certified courses are organized by the Ministry of Health to compensate for the lack of postgraduate training and to meet the theoretical and practical training needs of working nurses. This study aimed to investigate the opinions and recommendations of participants attending the certified operating room nursing courses organized at the İKCU Ataturk Training and Research Hospital. Method: Two operating room nursing courses were organized in 2016. The 1st Operating Room Nursing Certified Course Program ran from March 7, 2016 to April 6, 2016, and the 2nd Operating Room Nursing Certified Course Program ran from November 7, 2016 to December 6, 2016, at the İKCU Ataturk Training and Research Hospital. The first program accepted 29 participants; the second program accepted 30 participants. For data collection, the 'Operating Room Nursing Certified Training Program Evaluation Form' and the 'Operating Room Nursing Certified Training Program Theoretical Training Evaluation Form' were used. A three-point Likert-type scale is used for responses in the 'Operating Room Nursing Certified Training Program Evaluation Form' (1 = very good, 2 = good, 3 = poor). Data are collected in five areas: the training program, operating room practice, communication, responsibility, and learning experiences. A four-point Likert-type scale is used for responses in the 'Operating Room Nursing Certified Training Program Theoretical Training Evaluation Form' (1 = very satisfied, 2 = quite satisfied, 3 = satisfied, 4 = dissatisfied). Data are collected in two areas: presentation and content. Data were analyzed with the SPSS 16 program. Findings and Conclusion: It was found that 93.22% of participants were female and 62.7% had a bachelor's degree. It was seen that 33.87% of the group had 1-5 years of experience in their field. Of the trainees participating in the first operating room nursing certified course program, 88% stated the training program was very good and 12% stated it was good; no one selected the 'poor' option. Of the trainees participating in the second program, 81% stated the training program was very good and 19% stated it was good; again, no one selected the 'poor' option. When the success levels of the trainees were compared with a t-test according to their educational background, no meaningful difference was found (p > 0.05). The trainees were satisfied with the theoretical and practical components of the course, but noted that the support services (lunch, coffee breaks, etc.) were inadequate.
Keywords: certified courses, nursing certified courses, operating room nursing, training program
Procedia PDF Downloads 216
1122 A Readiness Framework for Digital Innovation in Education: The Context of Academics and Policymakers in Higher Institutions of Learning to Assess the Preparedness of Their Institutions to Adopt and Incorporate Digital Innovation
Authors: Lufungula Osembe
Abstract:
The field of education has witnessed advances in technology and digital transformation. Methods of teaching have undergone significant changes in recent years, with effects on areas such as pedagogy, curriculum design, personalized teaching, gamification, data analytics, cloud-based learning applications, artificial intelligence tools, advanced LMS plug-ins, and the emergence of multimedia creation and design. The field of education has not been immune to the changes brought about by digital innovation in recent years, similar to other fields such as engineering, health, science, and technology. There is a need to examine the variables/elements that digital innovation brings to education and to develop a framework for higher institutions of learning to assess their readiness to create a viable environment for digital innovation to be successfully adopted. Given the potential benefits of digital innovation in education, it is essential to develop a framework that can assist academics and policymakers in higher institutions of learning in evaluating the effectiveness of adopting and adapting to the evolving landscape of digital innovation in education. The primary research question addressed in this study is to establish the preparedness of higher institutions of learning to adopt and adapt to the evolving landscape of digital innovation. This study follows a Design Science Research (DSR) paradigm to develop a readiness framework for academics and policymakers in higher institutions of learning to evaluate the readiness of their institutions to adopt digital innovation in education; the DSR methodology includes problem awareness, suggestion, development, evaluation, and conclusion. One of the major contributions of this study will be the development of the framework for digital innovation in education. Given the various opportunities offered by digital innovation in recent years, a readiness framework for digital innovation will play a crucial role in guiding academics and policymakers in their quest to align with emerging technologies facilitated by digital innovation in education.
Keywords: digital innovation, DSR, education, opportunities, research
Procedia PDF Downloads 69
1121 Development of a Process Method to Manufacture Spreads from Powder Hardstock
Authors: Phakamani Xaba, Robert Huberts, Bilainu Oboirien
Abstract:
It has been over 150 years since margarine was first developed and manufactured using liquid oil, liquefied hardstock oils, and other oil-phase and aqueous-phase ingredients. Henry W. Bradley first used vegetable oils in the liquid state around 1871; since then, spreads have traditionally been manufactured using liquefied oils. The main objective of this study was to develop a process method to produce spreads using spray-dried hardstock fat powders as structuring fats in place of the current liquid structuring fats. A high-shear mixing system was used to condition the fat phase, and the aqueous phase was prepared separately. Using a single scraped surface heat exchanger and a pin stirrer, margarine was produced. The process method was developed to produce spreads with 40%, 50% and 60% fat. The developed method was divided into three steps. In the first step, fat powders were conditioned by melting and dissolving them into liquid oils. The liquefied portion of the oils was at 65 °C, whilst the spray-dried fat powder was at 25 °C. The two were mixed in a mixing vessel at 900 rpm for 4 minutes. The rest of the ingredients, i.e., lecithin, colorant, vitamins and flavours, were added at ambient conditions to complete the fat/oil phase. The water phase was prepared separately by mixing salt, water, preservative and acidifier in the mixing tank. Milk was also prepared separately by pasteurizing it at 79 °C prior to feeding it into the aqueous phase. All the water phase contents were chilled to 8 °C. The oil phase and water phase were mixed in a tank, then fed into a single scraped surface heat exchanger. After the scraped surface heat exchanger, the emulsion was fed into a pin stirrer to work the formed crystals and produce margarine. The margarine produced using the developed process had fat levels of 40%, 50% and 60%, and it passed all the qualitative, stability, and taste assessments. The scores were 6/10, 7/10 and 7.5/10 for the 40%, 50% and 60% fat spreads, respectively. The success of the trials brought about differentiated knowledge on how to manufacture spreads using non-micronized spray-dried fat powders as hardstock. Manufacturers no longer need to store structuring fats at 80-90 °C, or even higher in winter; instead, they can adapt their processes to use fat powders, which need to be stored at only 25 °C. The developed process used one scraped surface heat exchanger instead of the four to five currently used in votator-based plants. The use of a single scraped surface heat exchanger translated to about 61% energy savings, i.e., 23 kW per ton of product. Furthermore, the energy saved by implementing separate pasteurization was calculated to be 6.5 kW per ton of product produced.
Keywords: margarine emulsion, votator technology, margarine processing, scraped surface heat exchanger, fat powders
Procedia PDF Downloads 90
1120 In vitro Characterization of Mice Bone Microstructural Changes by Low-Field and High-Field Nuclear Magnetic Resonance
Authors: Q. Ni, J. A. Serna, D. Holland, X. Wang
Abstract:
The objective of this study is to develop Nuclear Magnetic Resonance (NMR) techniques to enhance bone-related research, applied in vitro to normal and disuse (biglycan knockout) mouse bone, by using both low-field and high-field NMR simultaneously. It is known that the total amplitude of the T₂ relaxation envelope, measured by the Carr-Purcell-Meiboom-Gill (CPMG) NMR spin echo train, is a representation of the liquid phase inside the pores. Therefore, the NMR CPMG magnetization amplitude can be converted to a volume of water after calibration against the NMR signal amplitude of a known volume of water. In this study, the distribution of mobile water and the porosity are determined using the low-field (20 MHz) CPMG relaxation technique, and the pore size distributions are determined by a computational inversion relaxation method. It is also known that the total proton intensity of magnetization from the NMR free induction decay (FID) signal is due to the water present inside the pores (mobile water), the water that has undergone hydration with the bone (bound water), and the protons in the collagen and mineral matter (solid-like protons). Therefore, the components of total mobile and bound water within bone can be determined by the low-field NMR free induction decay technique. Furthermore, the bound water in the solid phase (mineral and organic constituents), especially the dominant component of calcium hydroxyapatite (Ca₁₀(OH)₂(PO₄)₆), can be determined using high-field (400 MHz) magic angle spinning (MAS) NMR. With the MAS technique reducing NMR spectral linewidth, inhomogeneous broadening, and susceptibility broadening of the liquid-solid mix, in particular, we can conduct further research into the ¹H and ³¹P elements and environments of bone materials to identify the locations of bound water, such as the OH⁻ group, within the minerals and bone architecture. We hypothesize that low-field NMR combined with high-field magic angle spinning NMR can provide a more complete interpretation of water distribution, particularly of bound water, and these data are important to assess bone quality and predict the mechanical behavior of bone.
Keywords: bone, mice bone, NMR, water in bone
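The CPMG step described above amounts to fitting the echo-train decay as a sum of exponentials and converting the total initial amplitude to a water volume via calibration. The Python sketch below illustrates this with a synthetic two-pool decay; the T₂ values, noise level, and calibration constant are assumptions for demonstration.

```python
# Sketch: fit a synthetic two-pool CPMG decay, then convert total amplitude to
# water volume using an assumed calibration constant (signal per microliter).
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0.1, 200.0, 400)                          # echo times (ms)
decay = 0.6 * np.exp(-t / 5.0) + 0.4 * np.exp(-t / 60.0)  # synthetic T2 signal
decay += np.random.default_rng(1).normal(0, 0.005, t.size)

def two_pool(t, a1, t2_1, a2, t2_2):
    return a1 * np.exp(-t / t2_1) + a2 * np.exp(-t / t2_2)

(a1, t2_1, a2, t2_2), _ = curve_fit(two_pool, t, decay, p0=(0.5, 3.0, 0.5, 50.0))
total_amplitude = a1 + a2

amp_per_uL = 0.01      # hypothetical calibration from a known water sample
print(f"Short T2 = {t2_1:.1f} ms, long T2 = {t2_2:.1f} ms")
print(f"Estimated mobile-water volume = {total_amplitude / amp_per_uL:.1f} uL")
```

The inversion method mentioned in the abstract generalizes this two-term fit to a full (regularized) T₂ distribution, from which pore sizes are inferred.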
Procedia PDF Downloads 177
1119 Virtual Team Performance: A Transactive Memory System Perspective
Authors: Belbaly Nassim
Abstract:
Virtual team (VT) initiatives, in which teams are geographically dispersed and communicate via modern computer-driven technologies, have attracted increasing attention from researchers and professionals. The growing need to examine how to balance and optimize VTs is particularly important given the exposure experienced by companies when their employees encounter globalization and decentralization pressures while companies monitor VT performance. Hence, organizations are regularly limited by misalignment between the behavioral capabilities of the team's dispersed competences and knowledge capabilities, and by how trust issues interplay with and influence these VT dimensions and the effects of such exchanges. In fact, the future success of business depends on the extent to which VTs manage their dispersed expertise, skills and knowledge efficiently to stimulate VT creativity. A transactive memory system (TMS) may enhance VT creativity through its three dimensions: knowledge specialization, credibility and knowledge coordination. A TMS can be understood as a composition of both a structural component, residing in individual knowledge, and a set of communication processes among individuals. The individual knowledge is shared while being retrieved and applied, and the learning is coordinated. TMS is driven by the central concept that the system is built on the distinction between internal and external memory encoding. A VT learns something new and catalogs it in memory for future retrieval and use. TMS uses the role of information technology to explain VT behaviors by offering VT members the possibility to encode, store, and retrieve information. TMS considers the members of a team as a processing system in which the location of expertise both enhances knowledge coordination and builds trust among members over time. We build on the TMS dimensions to hypothesize the effects of specialization, coordination, and credibility on VT creativity. In fact, VTs consist of dispersed expertise, skills and knowledge that can positively enhance coordination and collaboration. Ultimately, this team composition may lead to recognition of both who has expertise and where that expertise is located; over time, it may also build trust among VT members, developing the ability to coordinate their knowledge, which can stimulate creativity. We also assess the reciprocal relationship between the TMS dimensions and VT creativity. We use TMS to provide researchers with a theoretically driven model that is empirically validated through survey evidence. We propose that TMS provides a new way to enhance and balance VT creativity. This study also gives researchers insight into the use of TMS to positively influence VT creativity. In addition to our research contributions, we provide several managerial insights into how TMS components can be used to increase performance within dispersed VTs.
Keywords: virtual team creativity, transactive memory systems, specialization, credibility, coordination
Procedia PDF Downloads 174
1118 Influence of Cobalt Incorporation on the Structure and Properties of SOL-Gel Derived Mesoporous Bioglass Nanoparticles
Authors: Ahmed El-Fiqi, Hae-Won Kim
Abstract:
The incorporation of therapeutic elements such as Sr, Cu and Co into the bioglass structure, and their release as ions, is considered one of the promising approaches to enhance cellular responses, e.g., osteogenesis and angiogenesis. Here, cobalt, as an angiogenesis promoter, has been incorporated (at 0, 1 and 4 mol%) into sol-gel derived calcium silicate mesoporous bioglass nanoparticles. The composition and structure of the cobalt-free (CFN) and cobalt-doped (CDN) mesoporous bioglass nanoparticles have been analyzed by X-ray fluorescence (XRF), X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS) and Fourier-transform infrared spectroscopy (FT-IR). The physicochemical properties of CFN and CDN have been investigated using high-resolution transmission electron microscopy (HR-TEM), selected area electron diffraction (SAED), and energy-dispersive X-ray spectroscopy (EDX). Furthermore, the textural properties, including specific surface area, pore volume, and pore size, have been analyzed from N₂-sorption analyses. The surface charges of CFN and CDN were also determined from surface zeta potential measurements. The release of ions, including Co²⁺, Ca²⁺, and SiO₄⁴⁻, has been analyzed using inductively coupled plasma atomic emission spectrometry (ICP-AES). The loading and release of diclofenac, as an anti-inflammatory drug model, were explored in vitro using ultraviolet-visible spectroscopy (UV-Vis). XRD results confirmed the amorphous state of CFN and CDN, whereas XRF further confirmed that their chemical compositions are very close to the designed compositions. HR-TEM analyses unveiled nanoparticles with spherical morphologies, highly mesoporous textures, and sizes in the range of 90-100 nm. Moreover, N₂-sorption analyses revealed that the nanoparticles have pores with sizes of 3.2-2.6 nm, pore volumes of 0.41-0.35 cc/g and high surface areas in the range of 716-830 m²/g. High-resolution XPS analysis of the Co 2p core level provided structural information about the Co atomic environment and confirmed the electronic state of Co in the glass matrix. ICP-AES analysis showed the release of therapeutic doses of Co²⁺ ions from 4% CDN, up to 100 ppm within 14 days. Finally, diclofenac loading and release confirmed the drug/ion co-delivery capability of 4% CDN.
Keywords: mesoporous bioactive glass, nanoparticles, cobalt ions, release
Procedia PDF Downloads 107
1117 Neural Network Mechanisms Underlying the Combination Sensitivity Property in the HVC of Songbirds
Authors: Zeina Merabi, Arij Dao
Abstract:
The temporal order of information processing in the brain is an important code in many acoustic signals, including speech, music, and animal vocalizations. Despite its significance, surprisingly little is known about its underlying cellular mechanisms and network manifestations. In the songbird telencephalic nucleus HVC, a subset of neurons shows temporal combination sensitivity (TCS). These neurons show high temporal specificity, responding differently to distinct patterns of spectral elements and their combinations. HVC neuron types include basal-ganglia-projecting HVCX, forebrain-projecting HVCRA, and interneurons (HVCINT), each exhibiting distinct cellular, electrophysiological, and functional properties. In this work, we develop conductance-based neural network models connecting the different classes of HVC neurons via different wiring scenarios, aiming to explore possible neural mechanisms that orchestrate the combination sensitivity exhibited by HVCX neurons and to replicate the in vivo firing patterns observed when TCS neurons are presented with various auditory stimuli. The ionic and synaptic currents for each class of neurons represented in our networks are based on pharmacological studies, rendering the networks biologically plausible. We present for the first time several realistic scenarios in which the different types of HVC neurons can interact to produce this behavior. The different networks highlight neural mechanisms that could help explain aspects of combination sensitivity, including 1) interplay between inhibitory interneuron activity and the post-inhibitory firing of HVCX neurons enabled by T-type Ca²⁺ and H currents, 2) temporal summation, at the TCS site, of opposing synaptic inputs that are time- and frequency-dependent, and 3) reciprocal inhibitory and excitatory loops as a potent mechanism to encode information over many milliseconds. The result is a plausible network model characterizing auditory processing in HVC. Our next step is to test the predictions of the model. Keywords: combination sensitivity, songbirds, neural networks, spatiotemporal integration
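A minimal sketch of the first mechanism, post-inhibitory rebound enabled by T-type Ca²⁺ and H currents, is given below. It is not the paper's full conductance-based network model: the gating kinetics are reduced to first order, there is no spike-generating mechanism, and all parameter values are illustrative rather than fitted to HVC data.

```python
import numpy as np

# Single-compartment sketch of an HVC_X-like cell: a hyperpolarizing pulse
# (standing in for interneuron inhibition) deinactivates a T-type Ca2+
# current and activates an H current; both depolarize the cell at release,
# producing a rebound. All parameters are illustrative assumptions.
dt, T = 0.01, 600.0                    # ms
t = np.arange(0.0, T, dt)

C = 1.0                                # uF/cm^2
gL, EL = 0.1, -65.0                    # leak
gT, ECa = 0.4, 120.0                   # T-type Ca2+
gH, EH = 0.2, -40.0                    # H current

def mT_inf(V): return 1.0 / (1.0 + np.exp(-(V + 60.0) / 6.0))   # fast activation
def hT_inf(V): return 1.0 / (1.0 + np.exp((V + 80.0) / 5.0))    # slow inactivation
def rH_inf(V): return 1.0 / (1.0 + np.exp((V + 75.0) / 5.5))    # slow H activation

V, hT, rH = -65.0, 0.1, 0.0
Vtrace = np.empty_like(t)

for i, ti in enumerate(t):
    I_inh = -2.0 if 100.0 <= ti < 300.0 else 0.0   # inhibitory pulse (uA/cm^2)
    IT = gT * mT_inf(V) ** 2 * hT * (V - ECa)
    IH = gH * rH * (V - EH)
    IL = gL * (V - EL)
    V += dt * (-IL - IT - IH + I_inh) / C
    hT += dt * (hT_inf(V) - hT) / 100.0            # tau ~ 100 ms
    rH += dt * (rH_inf(V) - rH) / 150.0            # tau ~ 150 ms
    Vtrace[i] = V

# After inhibition ends (~300 ms), the deinactivated T current plus I_h
# drive a rebound depolarization visible in Vtrace.
print(f"rebound peak after release: {Vtrace[int(300/dt):].max():.1f} mV")
```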
Procedia PDF Downloads 65
1116 Evaluation of Cardiac Rhythm Patterns after Open Surgical Maze-Procedures from Three Years' Experiences in a Single Heart Center
Authors: J. Yan, B. Pieper, B. Bucsky, H. H. Sievers, B. Nasseri, S. A. Mohamed
Abstract:
In order to optimize the efficacy of medications, regular follow-up with long-term continuous monitoring of heart rhythm patterns has been facilitated since the clinical introduction of cardiac implantable electronic monitoring devices (CIMDs). Extensive analysis of circadian rhythm properties can disclose the distribution of arrhythmic events, which may support appropriate medication according to a rate- or rhythm-control strategy and minimize consequent afflictions. 348 patients (69 ± 0.5 years, 61.8% male) with predisposed atrial fibrillation (AF), undergoing primary ablation therapies combined with coronary or valve operations and secondary implantation of CIMDs, were involved and divided into three groups: PAAF (paroxysmal AF) (n=99, 68.7% male), PEAF (persistent AF) (n=94, 62.8% male), and LSPEAF (long-standing persistent AF) (n=155, 56.8% male). All patients participated in a three-year ambulatory follow-up (3, 6, 9, 12, 18, 24, 30, and 36 months). The burden of AF recurrence was assessed using the cardiac monitoring devices, whereby attack frequencies and their circadian patterns were systematically analyzed. Anticoagulants and regular anti-arrhythmic medications were evaluated, the latter being classified into rate-control and rhythm-control regimens. Patients in the PEAF group showed the least AF burden after surgical ablation compared with both of the other subtypes (p < 0.05). Regardless of AF subtype, recurrent AF episodes were predominantly shorter than one hour, mostly within 10 minutes (p < 0.05). Concerning the circadian distribution of recurrences, frequent AF attacks were mostly recorded in the morning in the PAAF group (p < 0.05), while patients with predisposed PEAF reported fewer attack-induced discomforts in the latter half of the night, and those with LSPEAF did so only when not physically active after the primary surgical ablation. The AF subtypes thus presented distinct therapeutic efficacies after appropriate surgical ablation and distinct recurrence properties in terms of circadian distribution. Optimizing the medical regimen and drug dosages to maintain therapeutic success requires closer attention to detailed assessment during long-term follow-up. The rate-control strategy plays a much more important role than rhythm control in the ongoing follow-up examinations. Keywords: atrial fibrillation, CIMD, MAZE, rate-control, rhythm-control, rhythm patterns
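The circadian analysis described here can be illustrated with a simple histogram of device-recorded episode onset times. The sketch below uses simulated onsets with a morning-weighted cluster, loosely mimicking the pattern reported for the PAAF group; it is not the clinical dataset, and the distribution parameters are assumptions.

```python
import numpy as np

# Simulated AF episode onset times (hour of day, 0-24): a morning cluster
# plus a uniform background. Purely illustrative, not patient data.
rng = np.random.default_rng(7)
onset_hours = np.concatenate([
    rng.normal(8.0, 2.0, 60),    # morning-weighted cluster
    rng.uniform(0.0, 24.0, 40),  # background episodes
]) % 24

# Bin by hour of day to expose the circadian burden pattern.
counts, _ = np.histogram(onset_hours, bins=24, range=(0, 24))
for h, c in enumerate(counts):
    print(f"{h:02d}:00-{h+1:02d}:00  {'#' * int(c)}")
```

Binning by hour is the simplest first look; with real CIMD logs one would typically also weight by episode duration to obtain an hourly AF burden rather than a raw episode count.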
Procedia PDF Downloads 156
1115 Testing and Validation of Stochastic Models in Epidemiology
Authors: Snigdha Sahai, Devaki Chikkavenkatappa Yellappa
Abstract:
This study outlines approaches for testing and validating stochastic models used in epidemiology, focusing on the integration and functional testing of simulation code. It details methods for combining simple functions into comprehensive simulations, distinguishing between deterministic and stochastic components, and applying tests to ensure robustness. Techniques include isolating stochastic elements, utilizing large sample sizes for validation, and handling special cases. Practical examples are provided using R code to demonstrate integration testing, handling of incorrect inputs, and special cases. The study emphasizes the importance of both functional and defensive programming to enhance code reliability and user-friendliness. Keywords: computational epidemiology, epidemiology, public health, infectious disease modeling, statistical analysis, health data analysis, disease transmission dynamics, predictive modeling in health, population health modeling, quantitative public health, random sampling simulations, randomized numerical analysis, simulation-based analysis, variance-based simulations, algorithmic disease simulation, computational public health strategies, epidemiological surveillance, disease pattern analysis, epidemic risk assessment, population-based health strategies, preventive healthcare models, infection dynamics in populations, contagion spread prediction models, survival analysis techniques, epidemiological data mining, host-pathogen interaction models, risk assessment algorithms for disease spread, decision-support systems in epidemiology, macro-level health impact simulations, socioeconomic determinants in disease spread, data-driven decision making in public health, quantitative impact assessment of health policies, biostatistical methods in population health, probability-driven health outcome predictions
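Although the study's examples are in R, the same testing pattern can be sketched in Python: fix the random seed for reproducibility, validate the stochastic component against its analytic expectation using a large sample, and reject invalid inputs defensively. The toy simulate_outbreak function below is hypothetical, not the study's code.

```python
import numpy as np

def simulate_outbreak(n, p_infect, rng):
    """Toy stochastic component: number infected among n contacts."""
    # Defensive programming: fail loudly on invalid inputs.
    if not (0.0 <= p_infect <= 1.0):
        raise ValueError("p_infect must lie in [0, 1]")
    if n < 0:
        raise ValueError("n must be non-negative")
    return rng.binomial(n, p_infect)

def test_stochastic_mean():
    # Large-sample validation: the empirical mean of Binomial(100, 0.3)
    # draws should approach n * p = 30 as the sample grows.
    rng = np.random.default_rng(seed=42)   # fixed seed for reproducibility
    draws = [simulate_outbreak(100, 0.3, rng) for _ in range(50_000)]
    assert abs(np.mean(draws) - 30.0) < 0.2

def test_bad_input_rejected():
    # Functional test of the defensive path: p > 1 must raise, not pass.
    rng = np.random.default_rng(0)
    try:
        simulate_outbreak(100, 1.5, rng)
    except ValueError:
        pass
    else:
        raise AssertionError("invalid p_infect was silently accepted")

test_stochastic_mean()
test_bad_input_rejected()
print("all tests passed")
```

Separating the deterministic validation (input checks) from the stochastic validation (large-sample means under a fixed seed) mirrors the deterministic/stochastic distinction the abstract describes.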
Procedia PDF Downloads 7
1114 Co-Design of Accessible Speech Recognition for Users with Dysarthric Speech
Authors: Elizabeth Howarth, Dawn Green, Sean Connolly, Geena Vabulas, Sara Smolley
Abstract:
Through the EU Horizon 2020 Nuvoic Project, the project team recruited 70 individuals in the UK and Ireland to test the Voiceitt speech recognition app and provide user feedback to developers. The app is designed for people with dysarthric speech, to support communication with unfamiliar people and access to speech-driven technologies such as smart home equipment and smart assistants. Participants with atypical speech, due to a range of conditions such as cerebral palsy, acquired brain injury, Down syndrome, stroke, and hearing impairment, were recruited, primarily through organisations supporting disabled people. Most had physical or learning disabilities in addition to dysarthric speech. The project team worked with individuals, their families, and local support teams to provide access to the app, including through additional assistive technologies where needed. Testing was user-led, with participants asked to identify and test the use cases most relevant to their daily lives over a period of three months or more. Ongoing technical support and training were provided remotely and in person throughout the testing period. Structured interviews were used to collect feedback on users' experiences, with delivery adapted to individuals' needs and preferences. Informal feedback was collected through ongoing contact between participants, their families and support teams, and the project team. Focus groups were held to collect feedback on specific design proposals. User feedback shared with developers has led to improvements to the user interface and functionality, including faster voice training, simplified navigation, and the introduction of gamification elements and of switch access as an alternative to touchscreen access; other feature requests from users are still in development. This work offers a case study in successful and inclusive co-design with the disabled community. Keywords: co-design, assistive technology, dysarthria, inclusive speech recognition
Procedia PDF Downloads 110
1113 Area Exclosure as a Government Strategy to Restore Woody Plant Species Diversity: Case Study in Southern Ethiopia
Authors: Tsegaw Abebe, Temesgen Abebe
Abstract:
Land degradation is one of the most serious environmental challenges in Ethiopia and one of the major underlying causes of declining agricultural productivity. The Ethiopian government realized the significance of environmental restoration, specifically of deforested and degraded land, after the 1973 and 1984/85 major famines that struck the country. Among the various conservation strategies, the establishment of area exclosures has been regarded as an effective response to halt and reverse land degradation. A limited number of studies in Ethiopia deal with how the conversion of free-grazing and degraded lands into exclosures increases biomass accumulation, and these studies are not sufficient to draw conclusions about the capacity of exclosures to restore degraded vegetation across diverse agro-ecological conditions. The overall objective of this study was, therefore, to assess and evaluate the usefulness of the area exclosure technique in enhancing the rehabilitation of degraded ecosystems and thereby increasing natural capital at the study site (southern Ethiopia). Woody plant species data were collected from an eight-year-old area exclosure and an adjacent degraded site with similar landscape positions using a systematic sampling plot design. Woody species diversity was determined by the Shannon diversity index. The comparative analysis showed that the density of woody species in the exclosure and the degraded site was 778 and 222 individuals per hectare, respectively. A total of 16 woody species, representing 12 families, were recorded at the study site. All 12 families were recorded in the exclosure, while 5 were recorded at the degraded site. Of the 16 species, 15 were recorded in the exclosure and six at the degraded site; 10 species recorded in the exclosure were absent from the degraded site, and one species recorded at the degraded site was not present in the exclosure. The results showed that protecting the degraded site from human and animal disturbance promotes woody plant regeneration and productivity. Apart from the increase in woody plant species, the local communities have benefited from the exclosure in the form of both products (grass harvesting) and services (ecological). For this reason, the local communities hold positive attitudes and contribute substantially to the success of the exclosure at the study site. The present study clearly showed that area exclosure interventions should be oriented towards managing and improving the productivity of degraded land in such a way that both the need for biodiversity conservation and environmental sustainability and the local people's demand for biomass resources can be met. Keywords: degraded land, exclosure, land restoration, woody vegetation
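The Shannon diversity index used in the study is H' = -Σ pᵢ ln pᵢ, where pᵢ is the relative abundance of species i. The sketch below computes it for the two sites; the per-species stem counts are hypothetical, since the abstract reports only species totals (15 vs. 6) and overall densities, not per-species abundances.

```python
import numpy as np

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) from raw abundance counts."""
    counts = np.asarray(counts, dtype=float)
    p = counts / counts.sum()          # relative abundance of each species
    return float(-(p * np.log(p)).sum())

# Hypothetical per-species stem counts: 15 species in the exclosure,
# 6 on the degraded site, consistent with the totals reported above.
exclosure_counts = [120, 95, 80, 75, 70, 65, 55, 50, 45, 40, 30, 20, 15, 10, 8]
degraded_counts  = [90, 60, 35, 20, 10, 7]

print(f"Exclosure H' = {shannon_index(exclosure_counts):.2f}")
print(f"Degraded  H' = {shannon_index(degraded_counts):.2f}")
```

With any plausible abundance distribution, the richer and more even exclosure community yields the higher H', which is the comparison the study's diversity analysis rests on.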
Procedia PDF Downloads 427
1112 Fake News and Conspiracy Narratives in the Covid-19 Crisis: An International Comparison
Authors: Caja Thimm
Abstract:
Already well before the Corona pandemic hit the world, 'fake news' was no longer regarded as harmless twists of the truth but as intentionally composed disinformation, often with the goal of manipulative populist propaganda. During the Corona crisis, conspiracy narratives in particular became a worldwide phenomenon with dangerous consequences (anti-vaccination myths). The success of such manipulated news needs to be counteracted by trustworthy news, which in Europe particularly includes public broadcasting media and their social media channels. To better understand how the main public broadcasters in Germany, the UK, and France used Instagram strategically, a comparative study was carried out. In this empirical study, we compared the activities of selected formats during the Corona crisis in order to see how the public broadcasters reached their audiences and how this might, in the longer run, affect journalistic strategies on social media platforms. A first analysis showed a striking increase in overall social media use: almost one in two adult online users (48%) obtained information about the virus via social media, and 38% of the younger age group (18-24) looked for Covid-19 information on Instagram, so the platform can be regarded as one of the central digital spaces for Corona-related information searches. Quantitative measures showed that 47% of recent posts by the broadcasters were related to Corona, and 7% treated conspiracy myths. For the more detailed content analysis, the following categories were applied: digital storytelling and Instagram stories; textuality and semantic keys; links to information; stickers; video chat; fact-checking; news ticker; service; and infographics and animated tables. In addition to these basic features, we looked in particular for new formats created during the crisis. Journalistic use of social media platforms opens up immediate and creative ways of applying the media logics of the respective platforms, and the BBC and ARD formats in particular proved to be interactive, responsive, and entertaining. Among them were new formats such as a space for user questions and personal uploads, interviews, music, comedy, etc. The fact-checking channel attracted particular attention, as many user questions focused on the conspiracy theories that dominated public discourse for many weeks in 2020. In the presentation, we will introduce eight strategies that show how public broadcasting journalism can adopt digital platforms, use them creatively, and hence help counteract conspiracy narratives and fake news. Keywords: fake news, social media, digital journalism, digital methods
Procedia PDF Downloads 156
1111 Numerical Simulation of Precast Concrete Panels for Airfield Pavement
Authors: Josef Novák, Alena Kohoutková, Vladimír Křístek, Jan Vodička
Abstract:
Numerical analysis software is among the main tools for simulating the real behavior of various concrete structures and elements. Compared with experimental tests, it offers an affordable way to study the mechanical behavior of structures under various conditions. This contribution deals with a precast element of an innovative airfield pavement system being developed within an ongoing scientific project. The proposed system consists of a two-layer surface course of precast concrete panels placed on a two-layer base of fiber-reinforced concrete with recycled aggregate. As the panels are supposed to be installed directly on the hardened base course, imperfections at the interface between the base course and the surface course are expected. Considering such circumstances, three different behavior patterns could be established and considered when designing the precast element. The enormous cost of full-scale experiments makes it necessary to simulate the behavior of the element in numerical analysis software using the finite element method. The simulation was conducted on a nonlinear model in order to obtain results that could fully substitute for experimental results. First, several loading schemes were considered with the aim of identifying the critical one, which was then used for the subsequent simulation. The main objective of the simulation was to optimize the reinforcement of the element subjected to quasi-static loading from airplanes. Several parameters were considered in the simulation, namely geometrical imperfections, manufacturing imperfections, the stress state in the reinforcement, the stress state in the concrete, and crack width. The numerical simulation revealed that the precast element should be heavily reinforced to fulfill all the assumed demands. The main cause of the high amount of reinforcement is the size of the imperfections that could occur in the real structure. Improving manufacturing quality, installing the precast panels on a fresh base course, or using a bedding layer underneath the surface course are the main ways to reduce the size of imperfections and consequently lower reinforcement consumption. Keywords: nonlinear analysis, numerical simulation, precast concrete, pavement
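As a back-of-envelope complement to the nonlinear FEM analysis, the sketch below estimates the required tension reinforcement for a panel strip from a design bending moment, using the common lever-arm approximation A_s = M / (0.9 · d · f_yd). It only illustrates why larger imperfection-induced moments drive up reinforcement; the moments, effective depth, and design steel strength are assumptions, not values from the paper.

```python
# Back-of-envelope check (not the paper's FEM model): required tension steel
# per metre strip of a panel for a given design moment, using the standard
# lever-arm approximation. All numeric inputs below are assumptions.

def required_steel_area(M_kNm: float, d_m: float, f_yd_MPa: float) -> float:
    """Return required steel area A_s in mm^2 per metre strip.

    M_kNm    design bending moment per metre strip (kN*m/m)
    d_m      effective depth of the section (m)
    f_yd_MPa design yield strength of the reinforcement (N/mm^2)
    """
    z_mm = 0.9 * d_m * 1e3                  # internal lever arm, in mm
    return (M_kNm * 1e6) / (z_mm * f_yd_MPa)  # N*mm / (mm * N/mm^2) = mm^2

# Larger interface imperfections raise the design moment, hence more steel.
for M in (40.0, 80.0, 120.0):               # hypothetical moments, kN*m/m
    A_s = required_steel_area(M, d_m=0.16, f_yd_MPa=435.0)
    print(f"M = {M:5.1f} kNm/m  ->  A_s = {A_s:6.0f} mm^2/m")
```

The roughly linear growth of A_s with M shows, in miniature, why the simulated imperfection scenarios led to heavily reinforced elements and why reducing imperfection size directly lowers reinforcement consumption.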
Procedia PDF Downloads 256
1110 Contribution of the Corn Milling Industry to a Global and Circular Economy
Authors: A. B. Moldes, X. Vecino, L. Rodriguez-López, J. M. Dominguez, J. M. Cruz
Abstract:
The concept of the circular economy focuses on the importance of providing goods and services sustainably. Thus, in the future it will be necessary to respond to environmental contamination and to the consumption of non-renewable substrates by moving to a more restorative economic system that drives the utilization and revalorization of residues into valuable products. Throughout its evolution, our industrial economy has hardly moved beyond one major characteristic established in the early days of industrialization: a linear model of resource consumption. However, this industrial consumption system cannot be maintained for long. On the other hand, many industries, such as the corn milling industry, consume relatively small amounts of non-renewable substrates yet produce valuable streams that, if treated properly, could provide additional economic and environmental benefits through the extraction of commercially interesting renewable products able to replace some of the substances obtained by chemical synthesis from non-renewable substrates. From this point of view, using streams from the corn milling industry to obtain surface-active compounds will decrease the utilization of non-renewable sources for obtaining this kind of compound, contributing to a circular and global economy. However, the success of the circular economy depends on the interest of the industrial sectors in revalorizing their streams through relevant new business models; it is therefore necessary to invest in research on alternatives that reduce the consumption of non-renewable substrates. This study proposes the utilization of a corn milling industry stream to obtain an extract with surfactant capacity. Once the biosurfactant is extracted, the corn milling stream can be commercialized as a nutritional medium in biotechnological processes or as an animal feed supplement. Usually this stream is combined with other ingredients to obtain a product known as corn gluten feed, or it may be sold separately as a liquid protein source for beef and dairy feeding or as a nutritional pellet binder. Following the production scheme proposed in this work, the corn milling industry would obtain a biosurfactant extract that could be incorporated into its own production process, replacing the chemical detergents used at certain points of the production chain, or commercialized as a new product of corn manufacturing. The biosurfactants obtained from the corn milling industry could replace chemical surfactants in many formulations and uses, exemplifying the potential of many industrial streams to yield valuable products when managed properly. Keywords: biosurfactants, circular economy, corn, sustainability
Procedia PDF Downloads 261