Search results for: variable step size
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10163

2393 Investigating the Viability of Small-Scale Rapid Alloy Prototyping of Interstitial Free Steels

Authors: Talal S. Abdullah, Shahin Mehraban, Geraint Lodwig, Nicholas P. Lavery

Abstract:

The defining property of Interstitial Free (IF) steels is formability, comprehensively measured using the Lankford coefficient (r-value) on uniaxial tensile test data. The contributing factors supporting this feature are grain size, orientation, and elemental additions. The processes that effectively modulate these factors are the casting procedure, hot rolling, and heat treatment. An established methodology is well practised in the steel industry; however, large-scale production and experimentation consume significant amounts of time, money, and material. Introducing small-scale rapid alloy prototyping (RAP) as an alternative process would considerably reduce these drawbacks relative to standard practice. The aim is to fine-tune the fundamental procedures implemented in the industrial plant to adapt them to the RAP route. IF material is remelted in an 80-gram coil induction melting (CIM) glovebox. To produce small grains, maximum deformation must be induced in the cast material during hot rolling. The rolled strip must then reproduce the polycrystalline behaviour of the bulk material by displaying a resemblance in microstructure, hardness, and formability to the literature and actual plant steel. A successful outcome of this work is that small-scale RAP can achieve target compositions with similar microstructures and statistically consistent mechanical properties, which complements and accelerates the development of novel steel grades.
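Since the abstract rests on the Lankford coefficient as its formability measure, a minimal sketch of how the r-value is obtained from uniaxial tensile data may be useful. The specimen dimensions below are illustrative assumptions, not values from the paper:

```python
import math

def lankford_r(w0, w, t0, t):
    """Lankford coefficient r = true width strain / true thickness strain,
    from initial/final width (w0, w) and thickness (t0, t) of the gauge
    section of a uniaxial tensile specimen."""
    eps_w = math.log(w / w0)   # true width strain (negative in tension)
    eps_t = math.log(t / t0)   # true thickness strain (negative in tension)
    return eps_w / eps_t

# Illustrative numbers only: a strip that thins less than it narrows
# has r > 1, i.e. good resistance to thinning during forming.
r = lankford_r(w0=12.5, w=11.0, t0=1.0, t=0.93)
```

In practice the r-value is averaged over several rolling directions (e.g. 0°, 45°, 90°) to characterise plastic anisotropy, which is why miniaturised tensile testing orientation matters in the RAP route.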

Keywords: rapid alloy prototyping, plastic anisotropy, interstitial free, miniaturised tensile testing, formability

Procedia PDF Downloads 107
2392 Arc Interruption Design for DC High Current/Low SC Fuses via Simulation

Authors: Ali Kadivar, Kaveh Niayesh

Abstract:

This report summarizes a simulation-based approach to estimating the current interruption behavior of a fuse element used in a DC network protecting battery banks under different stresses. Because of the internal resistance of the batteries, the short-circuit current is very close to the nominal current, which makes the fuse design tricky. The base configuration considered in this report consists of five fuse units in parallel. The simulations are performed using a multi-physics software package, COMSOL® 5.6, and the necessary material parameters have been calculated using two other software packages. The first phase of the simulation starts with the heating of the fuse elements resulting from the current flow through the fusing element. In this phase, the heat transfer between the metallic strip and the adjacent materials results in melting and evaporation of the filler and housing before the aluminum strip is evaporated, and the current flow in the evaporated strip is cut off, or an arc is eventually initiated. The initiated arc starts to expand, so the entire metallic strip is ablated, and a long arc of around 20 mm is created within the first 3 milliseconds after arc initiation (v_elongation = 6.6 m/s). The final stage of the simulation is related to the arc simulation and its interaction with the external circuitry. Because of the strong ablation of the filler material and venting of the arc caused by the melting and evaporation of the filler and housing before an arc initiates, the arc is assumed to burn in almost pure ablated material. To model this arc precisely, one more step, the derivation of the transport coefficients of the plasma in ablated urethane, was necessary. The results indicate that an arc current interruption, in this case, will not be achieved within the first tens of milliseconds. In a further study, considering two series elements, the arc was interrupted within a few milliseconds.
A very important aspect in this context is the potential impact of the many broken strips in parallel with the one where the arc occurs. The arcing voltage generated is also applied to the other broken strips connected in parallel with the arcing path. As the gap between the other strips is very small, a large voltage of a few hundred volts generated during the current interruption may eventually lead to a breakdown of another gap. As two arcs in parallel are not stable, one of the arcs will extinguish, and the total current will again be carried by a single arc. This process may be repeated several times if the generated voltage is very large. The ultimate result is that the current interruption may be delayed.
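The first simulation phase described above is Joule heating of the strip up to melting. A crude upper-bound intuition for that phase can be sketched with an adiabatic energy balance, deliberately ignoring the conduction into filler and housing that the full COMSOL model captures; all numbers below are illustrative assumptions, not values from the report:

```python
def adiabatic_premelt_time(I, R, m, c, dT):
    """Crude adiabatic estimate of the time to bring a fuse strip to its
    melting point: assume all Joule heat I^2*R stays in the strip (no heat
    flow to the filler), so m*c*dT = I^2*R*t.  Real pre-arcing times differ
    because the surrounding materials absorb part of the heat."""
    return m * c * dT / (I**2 * R)

# Illustrative values only: a small aluminium strip under a fault current
# barely above nominal, as in the battery-bank scenario described above.
t = adiabatic_premelt_time(I=500.0,    # A, fault current
                           R=1.0e-3,   # ohm, strip resistance
                           m=0.5e-3,   # kg, strip mass
                           c=900.0,    # J/(kg*K), specific heat of Al
                           dT=640.0)   # K, rise to the Al melting point
```

The point of the sketch is only that, with the short-circuit current close to nominal, the heating term I²R is small and pre-arcing times stretch toward seconds, which is why the full multi-physics treatment of heat transfer and ablation is needed.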

Keywords: DC network, high current / low SC fuses, FEM simulation, parallel fuses

Procedia PDF Downloads 61
2391 Comparative Histological, Immunohistochemical and Biochemical Study on the Effect of Vit. C, Vit. E, Gallic Acid and Silymarin on Carbon Tetrachloride Model of Liver Fibrosis in Rats

Authors: Safaa S. Hassan, Mohammed H. Elbakry, Safwat A. Mangoura, Zainab M. Omar

Abstract:

Background: Liver fibrosis is the main reason for increased mortality in chronic liver disease, and it has no standard treatment. Antioxidants from a variety of sources are capable of slowing or preventing the oxidation of other molecules. Aim: To evaluate the hepatoprotective effect of vit. C, vit. E, and gallic acid in comparison to silymarin in the rat model of carbon tetrachloride-induced liver fibrosis, and their possible mechanisms of action. Material & Methods: A total of 60 adult male albino rats (160-200 g) were divided into six equal groups and received subcutaneous (s.c.) injections for 8 weeks. Group I: control. Group II: received 1.5 mL/kg of CCl4. Group III: CCl4 and co-treatment with silymarin 100 mg/kg p.o. daily. Group IV: CCl4 and co-treatment with vit. C 50 mg/kg p.o. daily. Group V: CCl4 and co-treatment with vit. E 200 mg/kg p.o. Group VI: CCl4 and co-treatment with gallic acid 100 mg/kg p.o. daily. The liver was processed for histological and immunohistochemical examination. Levels of AST, ALT, ALP, reduced GSH, MDA, SOD, and hydroxyproline concentration were measured and evaluated statistically. Results: Light and electron microscopic examination of the liver of group II exhibited foci of altered cells with dense nuclei and vacuolated, granular cytoplasm; mononuclear cell infiltration in portal areas; profuse collagen fiber deposits around the portal tracts; more intensely stained α-SMA-positive cells occupying most of the fibrotic liver tissue; electron-lucent areas in the cytoplasm of the hepatocytes; and margination of nuclear chromatin. Treatment with any of the antioxidants variably reduced the hepatic structural changes induced by CCl4. Biochemical analysis showed that carbon tetrachloride significantly increased the levels of serum AST, ALT, and ALP and the hepatic malondialdehyde and hydroxyproline content. Moreover, it decreased the activities of superoxide dismutase and glutathione. Treatment with silymarin, gallic acid, vit. C, and vit. E significantly decreased the AST, ALT, and ALP levels in plasma and the MDA and hydroxyproline content, and increased the activities of SOD and glutathione in liver tissue. The effect of administering CCl4 was improved to variable degrees by the antioxidants used. The most efficient antioxidant was silymarin, followed by gallic acid, vit. C, and then vit. E, possibly owing to their antioxidant effect, free-radical-scavenging properties, and the reduction of oxidant-dependent activation and proliferation of HSCs. Conclusion: These antioxidants can be promising drug candidates for ameliorating liver fibrosis while avoiding the side effects associated with existing drugs.

Keywords: antioxidant, CCl4, gallic acid, liver fibrosis

Procedia PDF Downloads 269
2390 Co-Disposal of Coal Ash with Mine Tailings in Surface Paste Disposal Practices: A Gold Mining Case Study

Authors: M. L. Dinis, M. C. Vila, A. Fiúza, A. Futuro, C. Nunes

Abstract:

The present paper describes the study of paste tailings prepared in the laboratory using gold tailings produced in a Finnish gold mine, with the incorporation of coal ash. Natural leaching tests were conducted with the original materials (tailings, fly and bottom ashes) and also with paste mixtures prepared with different percentages of tailings and ashes. After leaching, the solid wastes were physically and chemically characterized, and the results were compared to those of the blanks, i.e. the unleached samples. The tailings and the coal ash, as well as the prepared mixtures, were characterized by the following measurements in addition to the textural parameters: grain size distribution, chemical composition, and pH. Mixtures were also tested to characterize their mechanical behavior by measuring the flexural strength, the compressive strength, and the consistency. The original tailing samples presented an alkaline pH because during processing they had previously been submitted to pressure oxidation, with destruction of the sulfides. Therefore, it was not possible to ascertain the effect of the coal ashes on acid mine drainage. However, it was possible to verify that the paste reactivity was affected mostly by the bottom ash, and that tailings blended with bottom ash present lower mechanical strength than when blended with a combination of fly and bottom ash. Surface paste disposal offers an attractive alternative to traditional methods, in addition to the environmental benefits of incorporating large-volume wastes (e.g. bottom ash). However, a comprehensive characterization of the paste mixtures is crucial to optimize paste design in order to enhance engineering and environmental properties.

Keywords: coal ash, mine tailings, paste blends, surface disposal

Procedia PDF Downloads 290
2389 Mechanical Properties of Hybrid Ti6Al4V Part with Wrought Alloy to Powder-Bed Additive Manufactured Interface

Authors: Amnon Shirizly, Ohad Dolev

Abstract:

In recent years, the implementation and use of metal Additive Manufacturing (AM) parts have increased. As a result, the demand for bigger parts rises, along with the desire to reduce production cost. Generally, in powder-bed Additive Manufacturing technology the part size is limited by the machine build volume. To overcome this limitation, parts can be built in one or more machine operations and then mechanically joined or welded together. An alternative option is to produce a wrought part and build the AM structure onto it (mainly to reduce costs). In both cases, the mechanical properties of the interface have to be defined and understood. In the current study, the authors introduce guidelines on how to examine the interface between a wrought alloy and powder-bed AM material. The mechanical and metallurgical properties of the Ti6Al4V materials (wrought alloy and powder-bed AM) and their hybrid interface were examined. The mechanical properties were obtained from tensile test bars in the build direction and fracture toughness samples in various orientations. The hybrid specimens were built onto a wrought Ti6Al4V start plate. The standard fracture toughness (CT25) samples and hybrid tensile specimens were heat treated and milled as a post-process to final dimensions. In this study, the mechanical tensile test and fracture toughness results, supported by metallurgical observation, are introduced and discussed. It is shown that the hybrid approach of building powder-bed AM onto wrought material expands the current limitations of this manufacturing technology.

Keywords: additive manufacturing, hybrid, fracture-toughness, powder bed

Procedia PDF Downloads 100
2388 Study of Polish and Ukrainian Volunteers Helping War Refugees. Psychological and Motivational Conditions of Coping with Stress of Volunteer Activity

Authors: Agata Chudzicka-Czupała, Nadiya Hapon, Liudmyla Karamushka, Marta Żywiołek-Szeja

Abstract:

Objectives: The study concerns the determinants of coping with the stress of volunteer activity for refugees of the 2022 Russo-Ukrainian war. We examined the mental health reactions, chosen psychological traits, and motivational functions of volunteers working in Poland and Ukraine in relation to their styles of coping with stress. The study was financed with funds from the Foundation for Polish Science in the framework of the FOR UKRAINE Programme. Material and Method: The study was conducted in 2022 as a quantitative, questionnaire-based survey. Data were collected through an online survey. The volunteers were asked to assess their propensity to use different styles of coping with the stress connected with their activity for war refugees using the Brief Coping Orientation to Problems Experienced Inventory (Brief-COPE) questionnaire. Depression, anxiety, and stress were measured using the Depression, Anxiety, and Stress (DASS-21) scale. The chosen psychological traits, psychological capital and hardiness, were assessed with the Psychological Capital Questionnaire and the Norwegian Revised Scale of Hardiness (DRS-15R). The Volunteer Function Inventory (VFI) was also used. The significance of differences between the sample means was tested with Student's t-test. We used multivariate linear regression to identify factors associated with coping styles separately for each national sample. Results: The sample consisted of 720 volunteers helping war refugees (435 people in Poland and 285 in Ukraine). The results of the regression analysis indicate variables that are significant predictors of the propensity to use particular styles of coping with stress (problem-focused, emotion-focused, and avoidant coping). These include levels of depression and stress, individual psychological characteristics, and motivational functions, and they differ between Polish and Ukrainian volunteers.
Ukrainian volunteers are significantly more likely to use all three coping styles than Polish ones. The results also show significant differences in the severity of anxiety, stress, and depression, in the selected psychological traits, and in the motivational functions that led volunteers to participate in activities for war refugees. Conclusions: The results show that depression and stress severity, as well as psychological capital, hardiness, and motivational factors, are connected with coping behavior. They indicate the need for increased attention to the well-being of volunteers acting under stressful conditions, and they should guide the selection of people for specific types of volunteer activity.

Keywords: anxiety, coping with stress styles, depression, hardiness, mental health, motivational functions, psychological capital, resilience, stress, war, volunteer, civil society

Procedia PDF Downloads 67
2387 A New Multi-Target, Multi-Agent Search and Rescue Path Planning Approach

Authors: Jean Berger, Nassirou Lo, Martin Noel

Abstract:

Perfectly suited for natural or man-made emergency and disaster management situations such as floods, earthquakes, tornadoes, or tsunamis, multi-target search path planning for a team of rescue agents is known to be computationally hard, and most techniques developed so far fall short of successfully estimating the optimality gap. A novel mixed-integer linear programming (MIP) formulation is proposed to optimally solve the multi-target, multi-agent discrete search and rescue (SAR) path planning problem. Aimed at maximizing the cumulative probability of successful target detection, it captures anticipated feedback information associated with possible observation outcomes resulting from projected path execution, while modeling agent discrete actions over all possible moving directions. Problem modeling further takes advantage of a network representation to encompass decision variables, expedite compact constraint specification, and lead to substantial problem-solving speed-up. The proposed MIP approach uses the CPLEX optimization machinery, efficiently computing near-optimal solutions for practical-size problems, while giving a robust upper bound obtained from Lagrangian relaxation of the integrality constraints. Should a target be positively detected during plan execution, a new problem instance would simply be reformulated from the current state and then solved over the next decision cycle. A computational experiment shows the feasibility and the value of the proposed approach.
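The objective the abstract describes, maximizing the cumulative probability of target detection over projected paths, can be illustrated at toy scale by brute force. This sketch is not the paper's MIP (which CPLEX solves for practical sizes); it merely enumerates move sequences for a single agent on an assumed 3×3 grid with an assumed prior P over target locations and per-visit detection probability Q:

```python
from itertools import product

# Assumed prior probability of target presence per cell, and the assumed
# probability that one visit to a cell detects a target present there.
P = {(0, 0): 0.1, (1, 2): 0.3, (2, 1): 0.1, (2, 2): 0.5}
Q = 0.8
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def success_prob(path):
    """Cumulative detection probability: for each candidate cell, the chance
    the target is there times the chance >=1 of our visits detects it."""
    visits = {}
    for cell in path:
        visits[cell] = visits.get(cell, 0) + 1
    return sum(p * (1 - (1 - Q) ** visits.get(c, 0)) for c, p in P.items())

def best_path(start, horizon):
    """Exhaustively enumerate all move sequences of length `horizon`."""
    best, best_val = None, -1.0
    for moves in product(MOVES, repeat=horizon):
        path, (x, y), ok = [start], start, True
        for dx, dy in moves:
            x, y = x + dx, y + dy
            if not (0 <= x < 3 and 0 <= y < 3):
                ok = False
                break
            path.append((x, y))
        if ok and success_prob(path) > best_val:
            best, best_val = path, success_prob(path)
    return best, best_val

path, val = best_path((0, 0), horizon=4)
```

Enumeration is exponential in the horizon and the number of agents, which is exactly why a compact MIP formulation with a network representation and CPLEX, as in the paper, is needed for realistic instances.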

Keywords: search path planning, search and rescue, multi-agent, mixed-integer linear programming, optimization

Procedia PDF Downloads 366
2386 Space Tourism Pricing Model Revolution from Time Independent Model to Time-Space Model

Authors: Kang Lin Peng

Abstract:

Space tourism emerged in 2001 and became famous in 2021, following the development of space technology. The space market is distorted because of excess demand. Space tourism is currently rare and extremely expensive, with biased luxury-product pricing; it is a seller's market in which consumers cannot bargain. Spaceship companies such as Virgin Galactic, Blue Origin, and SpaceX have charged space tourism prices from 200 thousand to 55 million depending on the altitude reached in space. There should be a reasonable price determined on a fair basis. This study aims to derive a spacetime pricing model, which differs from the general pricing model on the earth's surface. We apply general relativity theory to deduce a mathematical formula for the space tourism pricing model, which covers the traditional time-independent model as a special case. In the future, the price of space travel will differ from that of current flight travel once space travel is measured in light-year units. The pricing of general commodities mainly considers the general equilibrium of supply and demand. A pricing model that considers risks and returns with time as the dependent variable is acceptable when commodities are on the earth's surface, called flat spacetime. Current economic theories, based on an independent time scale in flat spacetime, do not consider the curvature of spacetime. Current flight services, flying at heights of 6, 12, and 19 kilometers, charge with a pricing model that treats the time coordinate independently. However, space tourism involves flying at heights of 100 to 550 kilometers, which enlarges the spacetime curvature: tourists escape from essentially zero curvature on the earth's surface to the larger curvature of space. Different spacetime spans should be considered in the pricing model of space travel to echo general relativity theory. Intuitively, this spacetime commodity needs to account for the change of spacetime curvature from the earth to space.
We can assign a value to each unit of spacetime curvature, corresponding to the gradient change of the Ricci or energy-momentum tensor, and then determine how much to charge by integrating along the path from the earth into space. The concept is to add a price component corresponding to general relativity theory. When curvature vanishes, the space travel pricing model degenerates into the time-independent model of traditional commodity pricing. The contribution is that deriving the space tourism pricing model is a breakthrough on philosophical and practical issues for space travel. The results of the space tourism pricing model extend the traditional time-independent, flat-spacetime model. A pricing model that embeds spacetime as in general relativity theory can better reflect the rationality and accuracy of space travel pricing on the universal scale. Moving from an independent-time scale to a spacetime scale will bring a brand-new pricing concept for space travel commodities. Fair and efficient spacetime economics will also benefit human travel when we can travel in light-year units in the future.
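The idea of adding a curvature-dependent component to a flat-spacetime price can be written down in a minimal, purely illustrative form. The symbols below (the trip's worldline $\gamma$, the Ricci scalar $R$, the valuation constant $k$, and the flat-spacetime price $p_{\text{flat}}$) are assumptions of this sketch, not the paper's notation:

\[
p(\gamma) \;=\; p_{\text{flat}}(t) \;+\; k \int_{\gamma} R \,\mathrm{d}s ,
\]

where the integral accumulates curvature along the journey. On the earth's surface $R \approx 0$, so $p(\gamma) \to p_{\text{flat}}(t)$, recovering the traditional flat-spacetime model exactly as the abstract describes the degenerate case.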

Keywords: space tourism, spacetime pricing model, general relativity theory, spacetime curvature

Procedia PDF Downloads 119
2385 Success Factors for Innovations in SME Networks

Authors: J. Gochermann

Abstract:

Due to complex markets and products and an increasing need to innovate, cooperation between small and medium-sized enterprises has arisen during the last decades that is not primarily driven by process optimization or sales enhancement. Especially small and medium-sized enterprises (SMEs) increasingly collaborate in innovation and knowledge networks to enhance their knowledge and innovation potential and to find strategic partners for product and market development. These networks are characterized by dual objectives: the superordinate goal of the network as a whole and the specific objectives of the network members, which can cause target conflicts. Moreover, most SMEs do not have structured innovation processes, and they are not accustomed to collaborating on complex innovation projects in an open network structure. On the other hand, SMEs have characteristics well suited to promising networking: they are flexible and spontaneous, they have flat hierarchies, and the people involved are not anonymous. These characteristics distinguish them from larger corporations. German SME networks have been investigated to identify success factors for SME innovation networks. The fundamental network principles, donation-return and confidence, could be confirmed and identified as basic success factors. Further factors are voluntariness, an adequate number of network members, quality of communication, neutrality and competence of the network management, as well as reliability and obligingness of the network services. Innovation and knowledge networks with an appreciable number of members from science and technology institutions also need active sense-making to bring different disciplines into successful collaboration. It has also been investigated whether and how involvement in an innovation network affects the innovation structure and culture inside the member companies. The degree of effect grows with the duration and intensity of commitment.

Keywords: innovation and knowledge networks, SME, success factors, innovation structure and culture

Procedia PDF Downloads 278
2384 Chitosan Coated Liposome Incorporated Cyanobacterial Pigment for Nasal Administration in the Brain Stroke

Authors: Kyou Hee Shim, Hwa Sung Shin

Abstract:

When a thrombolytic agent is administered to treat ischemic stroke, excessive reactive oxygen species are generated due to the sudden provision of oxygen, causing secondary damage and cell necrosis. Thus, it is necessary to administer an adjuvant as well as the thrombolytic agent to protect and reduce damaged tissue. As cerebral blood vessels have a specific structure called the blood-brain barrier (BBB), it is not easy to transfer substances from blood to tissue. Therefore, the development of a drug carrier is required to increase drug delivery efficiency to brain tissue. In this study, a cyanobacterial pigment from blue-green algae, known for its neuroprotective as well as antioxidant effects, was nasally administered to bypass the BBB. In order to deliver the cyanobacterial pigment efficiently, a nano-sized liposome was used as a carrier. The liposomes were coated with positively charged chitosan, since negative residues are present at the nasal mucosa, the first gateway of nasal administration. Characteristics of the liposomes, including morphology, size, and zeta potential, were analyzed by transmission electron microscopy (TEM) and a zeta analyzer. Cytotoxicity tests showed that the liposomes were not harmful. Furthermore, when the drug was administered to an ischemic stroke animal model, we could confirm that the pharmacological effect of the pigment delivered by the chitosan-coated liposome was enhanced compared to that of the non-coated liposome. Consequently, the chitosan-coated liposome can be considered an optimized drug delivery system for the treatment of acute ischemic stroke.

Keywords: ischemic stroke, cyanobacterial pigment, liposome, chitosan, nasal administration

Procedia PDF Downloads 220
2383 Biodegradation of Phenazine-1-Carboxylic Acid by Rhodanobacter sp. PCA2 Proceeds via Decarboxylation and Cleavage of Nitrogen-Containing Ring

Authors: Miaomiao Zhang, Sabrina Beckmann, Haluk Ertan, Rocky Chau, Mike Manefield

Abstract:

Phenazines are a large class of nitrogen-containing aromatic heterocyclic compounds that are almost exclusively produced by bacteria from diverse genera, including Pseudomonas and Streptomyces. Phenazine-1-carboxylic acid (PCA), one of the 'core' phenazines, is converted from chorismic acid before being modified to other phenazine derivatives in different cells. Phenazines have attracted enormous interest because of their multiple roles in biocontrol, bacterial interaction, biofilm formation, and the fitness of their producers. However, in spite of this ecological importance, degradation as a part of the fate of phenazines has so far received extremely limited attention. Here, to isolate PCA-degrading bacteria, 200 mg L⁻¹ PCA was supplied as the sole carbon, nitrogen, and energy source in a minimal mineral medium. Quantitative PCR and reverse-transcription PCR were employed to study the abundance and activity, respectively, of the functional gene MFORT 16269 in PCA degradation. Intermediates and products of PCA degradation were identified with LC-MS/MS. After enrichment and isolation, a PCA-degrading strain was selected from soil and designated Rhodanobacter sp. PCA2 based on full 16S rRNA sequencing. As determined by HPLC, strain PCA2 consumed 200 mg L⁻¹ (836 µM) PCA at a rate of 17.4 µM h⁻¹, accompanied by a significant cell yield increase from 1.92 × 10⁵ to 3.11 × 10⁶ cells per mL. Strain PCA2 was capable of degrading other phenazines as well, including phenazine (4.27 µM h⁻¹), pyocyanin (2.72 µM h⁻¹), neutral red (1.30 µM h⁻¹), and 1-hydroxyphenazine (0.55 µM h⁻¹). Moreover, during the incubation, transcript copies of the MFORT 16269 gene increased significantly from 2.13 × 10⁶ to 8.82 × 10⁷ copies mL⁻¹, which was 2.77 times faster than the increase in the corresponding gene copy number (2.20 × 10⁶ to 3.32 × 10⁷ copies mL⁻¹), indicating that the MFORT 16269 gene was activated and played a role in PCA degradation.
As analyzed by LC-MS/MS, decarboxylation of the ring structure was determined to be the first step of PCA degradation, followed by cleavage of the nitrogen-containing ring by a dioxygenase, which catalyzed the conversion of phenazine to nitrosobenzene. Subsequently, phenylhydroxylamine was detected after incubation for two days and was then transformed to aniline and catechol. Additionally, genomic and proteomic analyses were carried out for strain PCA2. Overall, the findings presented here show that the newly isolated strain Rhodanobacter sp. PCA2 is capable of degrading phenazines through decarboxylation and cleavage of the nitrogen-containing ring, during which the MFORT 16269 gene was activated and played important roles.

Keywords: decarboxylation, MFORT16269 gene, phenazine-1-carboxylic acid degradation, Rhodanobacter sp. PCA2

Procedia PDF Downloads 220
2382 Mineral Slag Used as an Alternative of Cement in Concrete

Authors: Eskinder Desta Shumuye, Jun Zhao, Zike Wang

Abstract:

This paper summarizes the results of experimental studies carried out at the research laboratory of the School of Mechanics and Engineering Science, Zhengzhou University, on the performance of concrete produced by combining Ordinary Portland Cement (OPC) with Ground-Granulated Blast Furnace Slag (GGBS). Concrete specimens cast with OPC and various percentages of GGBS (0%, 30%, 50%, and 70%) were subjected to high-temperature exposure and extensive experimental tests reproducing a basic freeze-thaw cycle and a chloride-ion attack, to determine their combined effects within the concrete samples. From the experimental studies, comparisons of the physical, mechanical, and microstructural properties were made against ordinary Portland cement concrete. Further, the durability of GGBS cement concrete, such as under exposure to accelerated carbonation, chloride-ion attack, and freeze-thaw action, was analyzed for the various GGBS percentages in comparison with ordinary Portland cement concrete of similar mixture composition. The microstructure, mineralogical composition, and pore size distribution of the concrete specimens were determined via Scanning Electron Microscopy (SEM) and X-Ray Diffraction (XRD). The results demonstrated that when the exposure temperature increases from 200 ºC to 400 ºC, the residual compressive strength fluctuates for all concrete groups, and that compressive strength and performance under chloride-ion exposure decrease with increasing slag content. The SEM and EDS results showed an increase in carbonation rate with increasing slag content.

Keywords: accelerated carbonation, chloride-ion, concrete, ground-granulated blast furnace slag, GGBS, high-temperature

Procedia PDF Downloads 137
2381 Accuracy of VCCT for Calculating Stress Intensity Factor in Metal Specimens Subjected to Bending Load

Authors: Sanjin Kršćanski, Josip Brnić

Abstract:

The Virtual Crack Closure Technique (VCCT) is a method for calculating the stress intensity factor (SIF) of a cracked body that is easily implemented on top of basic finite element (FE) codes and as such can be applied to various component geometries. It is a relatively simple method that does not require any special finite elements and is usually used for calculating stress intensity factors at the crack tip for components made of brittle materials. This paper studies the applicability and accuracy of VCCT applied to standard metal specimens containing a through-thickness crack and subjected to an in-plane bending load. Finite element analyses were performed using regular 4-node, regular 8-node, and modified quarter-point 8-node 2D elements. The stress intensity factor was calculated from the FE model results for a given crack length, using data available from the FE analysis and a custom-programmed algorithm based on the virtual crack closure technique. The influence of finite element size on the accuracy of the calculated SIF was also studied. The final part of this paper includes a comparison of the calculated stress intensity factors with results obtained from analytical expressions found in the available literature and in the ASTM standard. The results calculated by this VCCT-based algorithm were found to be in good agreement with the results obtained with the mentioned analytical expressions.
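For readers unfamiliar with the technique, the core of a 2D VCCT post-processing step for regular 4-node elements can be sketched as follows. The input numbers are illustrative assumptions, not the paper's results; the formulas (energy release rate from the crack-tip nodal force and the opening displacement of the node pair behind the tip, then the standard G-K conversion) are the textbook ones:

```python
import math

def vcct_ki(Fy, dv, da, B, E, plane_stress=True, nu=0.3):
    """Mode-I SIF via 2D VCCT with regular 4-node elements.

    The mode-I energy release rate is G_I = Fy * dv / (2 * da * B), where
    Fy is the nodal force at the crack tip, dv the opening displacement of
    the node pair one element behind the tip, da the element length along
    the crack, and B the thickness.  K_I follows from G_I = K_I**2 / E',
    with E' = E for plane stress and E / (1 - nu**2) for plane strain.
    """
    G = Fy * dv / (2.0 * da * B)
    E_eff = E if plane_stress else E / (1.0 - nu**2)
    return math.sqrt(G * E_eff)

# Illustrative SI-unit values for a steel-like specimen (assumed, not from
# the paper): 1 mm elements, 5 mm thickness, E = 210 GPa.
K = vcct_ki(Fy=1.2e3, dv=4.0e-6, da=1.0e-3, B=5.0e-3, E=210e9)
```

Because dv is taken one element behind the tip while Fy is taken at the tip, the result is sensitive to element size, which is exactly the mesh-dependence effect the paper investigates.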

Keywords: VCCT, stress intensity factor, finite element analysis, 2D finite elements, bending

Procedia PDF Downloads 299
2380 Attributable Mortality of Nosocomial Infection: A Nested Case Control Study in Tunisia

Authors: S. Ben Fredj, H. Ghali, M. Ben Rejeb, S. Layouni, S. Khefacha, L. Dhidah, H. Said

Abstract:

Background: The Intensive Care Unit (ICU) provides continuous care and uses a high level of treatment technology. Although developed-country hospitals allocate only 5-10% of beds to critical care areas, approximately 20% of nosocomial infections (NIs) occur among patients treated in ICUs, whereas in developing countries the situation is still less well documented. The aim of our study is to assess mortality rates in ICUs and to determine their predictive factors. Methods: We carried out a nested case-control study in a 630-bed public tertiary care hospital in eastern Tunisia. We included in the study all patients hospitalized for more than two days in the surgical or medical ICU during the entire surveillance period. Cases were patients who died before ICU discharge, whereas controls were patients who survived to discharge. NIs were diagnosed according to the definitions of the 'Comité Technique des Infections Nosocomiales et les Infections Liées aux Soins' (CTINLIS, France). Data collection was based on the protocol of Rea-RAISIN 2009 of the National Institute for Health Watch (InVS, France). Results: Overall, 301 patients were enrolled from the medical and surgical ICUs. The mean age was 44.8 ± 21.3 years. The crude ICU mortality rate was 20.6% (62/301): 35.8% for patients who acquired at least one NI during their ICU stay and 16.2% for those without any NI, yielding an overall crude excess mortality rate of 19.6% (OR = 2.9; 95% CI, 1.6 to 5.3). The population-attributable fraction due to ICU-acquired NI among patients who died before ICU discharge was 23.46% (95% CI, 13.43%-29.04%). Overall, 62 case patients were compared to 239 control patients in the final analysis.
Case patients and control patients differed by age (p = 0.003), simplified acute physiology score II (p < 0.001), NI (p < 0.001), nosocomial pneumonia (p = 0.008), infection upon admission (p = 0.002), immunosuppression (p = 0.006), days of intubation (p < 0.001), tracheostomy (p = 0.004), days with urinary catheterization (p < 0.001), days with CVC (p = 0.03), and length of stay in the ICU (p = 0.003). Multivariate analysis retained the following factors: age older than 65 years (OR, 5.78 [95% CI, 2.03-16.05]; p = 0.001), duration of intubation of 1-10 days (OR, 6.82 [95% CI, 1.90-24.45]; p = 0.003), duration of intubation > 10 days (OR, 11.11 [95% CI, 2.85-43.28]; p = 0.001), duration of CVC of 1-7 days (OR, 6.85 [95% CI, 1.71-27.45]; p = 0.007), and duration of CVC > 7 days (OR, 5.55 [95% CI, 1.70-18.04]; p = 0.004). Conclusion: While surveillance provides important baseline data, successful trials with more active intervention protocols adopting a multimodal approach to the prevention of nosocomial infection prompted us to consider the feasibility of a similar trial in our context. The implementation of an efficient infection control strategy is therefore a crucial step towards improving the quality of care.
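The crude effect sizes quoted above follow directly from the group-specific mortality rates. A small illustrative sketch (rates taken from the abstract; the attributable-fraction line uses the simple Levin-type formula, which gives a slightly lower value than the adjusted 23.46% the study reports):

```python
# Crude excess mortality, odds ratio and attributable fraction from the
# mortality rates quoted in the abstract (illustrative only).
def odds(p):
    return p / (1.0 - p)

p_ni = 0.358       # ICU mortality with at least one nosocomial infection
p_no_ni = 0.162    # ICU mortality without any nosocomial infection
p_overall = 0.206  # crude ICU mortality (62/301)

excess = p_ni - p_no_ni                  # 0.196, i.e. 19.6%
crude_or = odds(p_ni) / odds(p_no_ni)    # ~2.9, matching the reported OR
paf = (p_overall - p_no_ni) / p_overall  # simple population-attributable fraction
print(excess, round(crude_or, 1), round(paf, 3))
```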

Keywords: intensive care unit, mortality, nosocomial infection, risk factors

Procedia PDF Downloads 403
2379 A Comparative Study of Optimization Techniques and Models to Forecasting Dengue Fever

Authors: Sudha T., Naveen C.

Abstract:

Dengue is a serious public health issue that imposes significant annual economic and welfare burdens on nations. However, enhanced optimization techniques and quantitative modeling approaches can predict the incidence of dengue. By advocating a data-driven approach, public health officials can make informed decisions, improving the overall effectiveness of outbreak control efforts. This study uses environmental data from two U.S. Federal Government agencies: the National Oceanic and Atmospheric Administration and the Centers for Disease Control and Prevention. Based on environmental data describing changes in temperature, precipitation, vegetation, and other factors known to affect dengue incidence, several predictive models are constructed that use different machine learning methods to estimate weekly dengue cases. The first step is data preparation, which includes handling outliers and missing values so that the data are ready for subsequent processing and the creation of an accurate forecasting model. In the second phase, multiple feature selection procedures are applied using various machine learning models and optimization techniques. In the third phase, machine learning models such as the Huber Regressor, Support Vector Machine, Gradient Boosting Regressor (GBR), and Support Vector Regressor (SVR) are compared under several feature-selection optimization techniques, such as Harmony Search and the Genetic Algorithm. In the fourth stage, model performance is evaluated using Mean Square Error (MSE), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE) as metrics. The goal is to select the optimization strategy that yields the smallest error at the lowest cost with the greatest predictive performance. Optimization is widely employed across industries, including engineering, science, management, mathematics, finance, and medicine. 
An effective optimization method based on Harmony Search and an integrated Genetic Algorithm is introduced for input feature selection, and it yields a marked improvement in the models' predictive accuracy. The predictive models built on the Huber Regressor perform best for both optimization and prediction.
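The model-comparison step described in the third and fourth phases can be sketched as follows; the synthetic features and targets below are purely illustrative stand-ins for the NOAA/CDC environmental data, and the feature-selection optimizers are omitted for brevity:

```python
# Sketch of comparing Huber, SVR and GBR models on MSE/MAE/RMSE.
# Data are synthetic placeholders, not the study's datasets.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import HuberRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))   # stand-ins for temperature, precipitation, etc.
y = 10 + 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "Huber": HuberRegressor(),
    "SVR": SVR(),
    "GBR": GradientBoostingRegressor(random_state=0),
}
results = {}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    mse = mean_squared_error(y_te, pred)
    results[name] = {"MSE": mse,
                     "MAE": mean_absolute_error(y_te, pred),
                     "RMSE": mse ** 0.5}
for name, m in results.items():
    print(name, {k: round(v, 3) for k, v in m.items()})
```

In a full pipeline, a Harmony Search or Genetic Algorithm would wrap this loop, proposing feature subsets and keeping the one that minimizes the chosen error metric.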

Keywords: deep learning model, dengue fever, prediction, optimization

Procedia PDF Downloads 57
2378 The Nexus between Manpower Training and Corporate Compliance

Authors: Timothy Wale Olaosebikan

Abstract:

The most active resource in any organization is its manpower; every other resource remains inactive unless there is competent manpower to handle it. Manpower training is needed to enhance productivity and the overall performance of organizations, in recognition of the important role manpower training plays in the attainment of organizational goals. Corporate compliance conjures visions of an incomprehensible matrix of laws and regulations that defy logic and control by even the most seasoned manpower training professionals. It can likewise be viewed as one of the most significant problems faced in the manpower training process of any organization, and it therefore commands attention and comprehension. Consequently, this study investigated the nexus between manpower training and corporate compliance. Data for the study were collected through a questionnaire with a sample size of 265 drawn by stratified random sampling, and were analyzed using descriptive and inferential statistics. The findings show that about 75% of the respondents agree that there is a strong relationship between manpower training and corporate compliance, which underpins the organizational gains from any training process. The findings further show that most organisations do not fully comply with the rules guiding the manpower training process, making the process less effective in improving organizational performance, which may affect overall profitability. The study concludes that the formulation of, and compliance with, adequate rules and guidelines for manpower training will produce effective results for both employees and the organization at large. The study recommends that leaders of organizations, industries, and institutions ensure total compliance with manpower training rules on the part of both employees and the organization. 
Organizations and stakeholders should also ensure that strict policies on corporate compliance with manpower training form the heart of their cardinal mission.

Keywords: corporate compliance, manpower training, nexus, rules and guidelines

Procedia PDF Downloads 136
2377 Sleep Ecology, Sleep Regulation and Behavior Problems in Maltreated Preschoolers: A Scoping Review

Authors: Sabrina Servot, Annick St-Amand, Michel Rousseau, Valerie Simard, Evelyne Touchette

Abstract:

Child maltreatment has a profound impact on children’s development. In its victims, internalizing and externalizing problems are highly prevalent, and sleep problems are common. Furthermore, the environment they live in is often disorganized, lacking routine and consistency. In non-maltreated children, several studies have documented the important role of sleep regulation and sleep ecology. A poor sleep ecology (e.g., lack of sleep hygiene and bedtime routine, inappropriate sleeping location) may lead to sleep regulation problems (e.g., short sleep duration, nocturnal awakenings), and sleep regulation problems may increase the risk of behavior problems. Therefore, this scoping review aims to map evidence about sleep ecology and sleep regulation and the associations between sleep ecology, sleep regulation, and behavior problems in maltreated preschoolers. Literature published since 1993 was searched in PsycInfo, PubMed, Medline, ERIC, and ProQuest Dissertations and Theses. Articles and theses were comprehensively reviewed against inclusion/exclusion criteria: 1) the work concerns maltreated children aged 1-5 years, and 2) it addresses at least one of the following: sleep ecology, sleep regulation, and/or their associations with behavior problems in maltreated preschoolers. Of the 650 studies screened, nine were included. Data were charted according to study characteristics, the nature of the variables documented, measures, analyses performed, and the results of each study, then synthesized in a narrative summary. The main results show that all included articles were quantitative. Foster children samples were used in four studies; children had experienced different types of maltreatment in six studies, while one was specifically about sexually abused children. Regarding sleep ecology, only one study describing maltreated preschoolers’ sleep ecology was found, while seven studies documented sleep regulation. 
Among these seven studies, 17 different sleep variables (e.g., parasomnia, dyssomnia, total 24-h sleep duration) were used, with each study documenting from one to nine of them. Actigraphic measures were employed in three studies; the others used parent-reported questionnaires or sleep diaries. Maltreated children’s sleep was described and/or compared to non-maltreated children’s sleep, or to an intervention group, showing mild differences. As for associations between sleep regulation and behavior problems, five studies investigated them, performing correlational or linear regression analyses between sleep and behavior problems and revealing some significant associations. No study was found on associations between sleep ecology and sleep regulation, between sleep ecology and behavior problems, or among all three variables. In conclusion, the literature on sleep ecology, sleep regulation, and their associations with behavior problems is far scarcer for maltreated preschoolers than for non-maltreated ones. At present, there is a particular paucity of research on sleep ecology and on the association between sleep ecology and sleep regulation in maltreated preschoolers, even though studies of non-maltreated children show that sleep ecology plays a major role in sleep regulation. In addition, because sleep regulation is measured in many different ways across the studies, it is difficult to compare their findings. Finally, research should fill these gaps, as recommendations could then be made to clinicians working with maltreated preschoolers regarding the use of sleep ecology and sleep regulation as intervention tools.

Keywords: maltreated preschoolers, sleep ecology, sleep regulation, behavior problems

Procedia PDF Downloads 144
2376 Design and Assessment of Base Isolated Structures under Spectrum-Compatible Bidirectional Earthquakes

Authors: Marco Furinghetti, Alberto Pavese, Michele Rinaldi

Abstract:

Concave Surface Slider devices have been increasingly used in real applications for the seismic protection of both bridge and building structures. Several research activities have been carried out in order to investigate the lateral response of this typology of device, and a reasonably high level of knowledge has been reached. If radial analysis is performed, the frictional force is always aligned with the restoring force, whereas under bidirectional seismic events a bi-axial interaction of the directions of motion occurs, due to the step-wise projection of the main frictional force, which is assumed to be aligned with the trajectory of the isolator. Nonetheless, if non-linear time history analyses are to be performed, standard codes provide precise rules for the definition of an averagely spectrum-compatible set of accelerograms in radial conditions, whereas for bidirectional motions different combinations of the single-component spectra can be found. Moreover, software for the adjustment of natural accelerograms is nowadays available, leading to higher-quality spectrum-compatibility and a smaller dispersion of results for radial motions. In this work, a simplified design procedure is defined for building structures base-isolated by means of Concave Surface Slider devices. Different case study structures have been analyzed. In a first stage, the capacity curve was computed by means of non-linear static analyses on the fixed-base structures: inelastic fiber elements were adopted and different direction angles of the lateral forces were studied. Thanks to these results, a linear elastic Finite Element Model was defined, characterized by the same global stiffness as the linear elastic branch of the non-linear capacity curve. Then, non-linear time history analyses were performed on the base-isolated structures by applying seven bidirectional seismic events. 
The spectrum-compatibility of the bidirectional earthquakes was studied by considering different combinations of the single components and by adjusting single records: with the proposed procedure, the results show a small dispersion and good agreement with the assumed design values.
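The bi-axial interaction mentioned above can be illustrated with a minimal force model for a single slider: under planar motion the friction force opposes the instantaneous velocity direction, so each force component depends on both velocity components. The function and parameter values below are a simplified sketch, not the device model used in the study:

```python
# Simplified planar force model of a Concave Surface Slider:
# pendular restoring term (N/R per unit displacement) plus Coulomb
# friction projected onto the velocity direction.
import math

def css_force(x, y, vx, vy, mu, N, R):
    """x, y: displacements; vx, vy: velocities; mu: friction coefficient;
    N: vertical load; R: effective radius (all values illustrative)."""
    speed = math.hypot(vx, vy)
    k = N / R                                   # pendular restoring stiffness
    fric_x = mu * N * vx / speed if speed else 0.0
    fric_y = mu * N * vy / speed if speed else 0.0
    return -k * x - fric_x, -k * y - fric_y

# Diagonal motion: the friction force splits between the two directions,
# coupling them even though the restoring terms are uncoupled.
fx, fy = css_force(x=0.1, y=0.0, vx=1.0, vy=1.0, mu=0.06, N=1000.0, R=3.0)
print(fx, fy)
```

A radial (one-directional) analysis corresponds to vy = 0, in which case the friction term collapses onto the motion axis and no interaction occurs.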

Keywords: concave surface slider, spectrum-compatibility, bidirectional earthquake, base isolation

Procedia PDF Downloads 288
2375 Strategic Analysis of Energy and Impact Assessment of Microalgae Based Biodiesel and Biogas Production in Outdoor Raceway Pond: A Life Cycle Perspective

Authors: T. Sarat Chandra, M. Maneesh Kumar, S. N. Mudliar, V. S. Chauhan, S. Mukherji, R. Sarada

Abstract:

The life cycle assessment (LCA) of biodiesel production from the freshwater microalga Scenedesmus dimorphus cultivated in an open raceway pond is performed. Various scenarios for biodiesel production were simulated using primary and secondary data. The parameters varied in the modelled scenarios related to biomass productivity, mode of culture mixing, and type of energy source. The process steps included algae cultivation in open raceway ponds, harvesting by chemical flocculation, dewatering by a mechanical drying option (MDO), followed by extraction, reaction, and purification. Anaerobic digestion of the defatted algal biomass (DAB) for biogas generation is considered as a co-product allocation, and the energy derived from the DAB was used upstream in the process. The scenarios were analysed for energy demand, emissions, and environmental impacts within boundary conditions grounded on a "cradle to gate" inventory. Across all scenarios, cultivation in the raceway pond was observed to be an energy-intensive process. The mode of culture mixing and the biomass productivity determined the energy requirements of the cultivation step. Emissions to freshwater were found to be the largest, contributing 93-97% of total emissions in all scenarios. Global warming potential (GWP) was found to be the major environmental impact, accounting for about 99% of total environmental impacts in all modelled scenarios. Overall emissions and impacts were directly related to energy demand, and an inverse relationship was observed with biomass productivity. The geographic location of an energy source affected the environmental impact of a given process. Integrating electricity derived from the defatted algal remnants into the cultivation system resulted in a 2% reduction in overall energy demand. Direct biogas generation from microalgae after harvesting is also analysed. An energy surplus was observed after using part of the energy upstream for biomass production. 
The results suggest that direct biogas production from harvested microalgae is an environmentally viable and sustainable option compared to biodiesel production.

Keywords: biomass productivity, energy demand, energy source, Lifecycle Assessment (LCA), microalgae, open raceway pond

Procedia PDF Downloads 284
2374 Flood Mapping Using Height above the Nearest Drainage Model: A Case Study in Fredericton, NB, Canada

Authors: Morteza Esfandiari, Shabnam Jabari, Heather MacGrath, David Coleman

Abstract:

Flooding is a severe issue in many places in the world, including the city of Fredericton, New Brunswick, Canada. The downtown area of Fredericton is close to the Saint John River, which is susceptible to flooding around May every year. Recently, the frequency of flooding seems to have increased, especially given that the downtown area and surrounding urban/agricultural lands were flooded in two consecutive years, 2018 and 2019. To obtain a clear picture of flood extent and of the damage to affected areas, it is necessary to use either flood inundation modelling or satellite data. Because optical satellites are weather-dependent and only intermittently available, and because hydrodynamic models are costly and require data that often do not exist, it is not always feasible to rely on these sources to generate quality flood maps during or after a catastrophe. Height Above the Nearest Drainage (HAND), a state-of-the-art topo-hydrological index, normalizes the height of a basin based on the relative elevation along the stream network and specifies the gravitational, or relative, drainage potential of an area. HAND is the relative height difference between the stream network and each cell of a Digital Terrain Model (DTM). The stream layer is produced through a multi-step, time-consuming process that does not always result in an optimal representation of the river centerline, depending on the topographic complexity of the region. HAND has been used in numerous case studies with quite acceptable, and sometimes unexpected, results, owing to natural and human-made features on the surface of the earth. Some of these features may disturb the generated model, so that the model cannot predict the flow simulation accurately. We propose to include a pre-existing stream layer generated by the Province of New Brunswick and to benefit from culvert maps to improve the water flow simulation and, accordingly, the accuracy of the HAND model. 
By considering these parameters in our processing, we were able to increase the accuracy of the model from nearly 74% to almost 92%. The improved model can be used for generating highly accurate flood maps, which is necessary for future urban planning and flood damage estimation without any need for satellite imagery or hydrodynamic computations.
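The core HAND idea, each DTM cell's elevation minus that of its drainage reference, can be sketched as below. Note that the true HAND model traces flow directions down to the stream network, whereas this toy version simply uses the nearest stream cell, which is only a crude illustration:

```python
# Toy HAND: height of each DTM cell above its *nearest* stream cell.
# (The real model follows flow directions to the drainage network.)
import numpy as np

def simple_hand(dtm, stream_mask):
    """dtm: 2-D elevation array; stream_mask: boolean array, True on streams."""
    stream_cells = np.argwhere(stream_mask)
    stream_elev = dtm[stream_mask]
    hand = np.empty_like(dtm, dtype=float)
    for r in range(dtm.shape[0]):
        for c in range(dtm.shape[1]):
            d2 = (stream_cells[:, 0] - r) ** 2 + (stream_cells[:, 1] - c) ** 2
            nearest = int(np.argmin(d2))
            hand[r, c] = dtm[r, c] - stream_elev[nearest]
    return hand

dtm = np.array([[5.0, 4.0, 3.0],
                [4.0, 2.0, 3.0],
                [6.0, 5.0, 4.0]])
streams = dtm <= 2.0                  # toy stream network: one low cell
hand = simple_hand(dtm, streams)
flooded = hand <= 1.0                 # cells within 1 m of stream level
print(hand)
print(flooded)
```

A flood map then follows by thresholding HAND at the expected rise in water level, as in the `flooded` mask above; improving the stream layer (e.g. with the provincial layer and culvert maps) changes `stream_mask` and hence the whole map.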

Keywords: HAND, DTM, rapid floodplain, simplified conceptual models

Procedia PDF Downloads 143
2373 A Method to Evaluate and Compare Web Information Extractors

Authors: Patricia Jiménez, Rafael Corchuelo, Hassan A. Sleiman

Abstract:

Web mining is gaining importance at an increasing pace. Currently, there are many complementary research topics under this umbrella. Their common theme is that they all focus on applying knowledge discovery techniques to data gathered from the Web. Sometimes these data are relatively easy to gather, chiefly when they come from server logs. Unfortunately, there are cases in which the data to be mined are the data displayed on a web document. In such cases, it is necessary to apply a pre-processing step to first extract the information of interest from the web documents. Such pre-processing steps are performed using so-called information extractors: software components that are typically configured by means of rules tailored to extracting the information of interest from a web page and structuring it according to a pre-defined schema. Paramount to getting good mining results is that the technique used to extract the source information be exact, which requires evaluating and comparing the different proposals in the literature from an empirical point of view. According to Google Scholar, about 4,200 papers on information extraction have been published during the last decade. Unfortunately, they were not evaluated within a homogeneous framework, which makes them difficult to compare empirically. In this paper, we report on an original information extraction evaluation method. Our contribution is three-fold: a) this is the first attempt to provide an evaluation method for proposals that work on semi-structured documents; the little existing work on this topic focuses on proposals that work on free text, which has little to do with extracting information from semi-structured documents. 
b) It provides a method that relies on statistically sound tests to support the conclusions drawn; the previous work does not provide clear guidelines or recommend statistically sound tests, but rather surveys the many features to take into account, as well as related work. c) We provide a novel method to compute the performance measures for unsupervised proposals, which would otherwise require the intervention of a user to compute them from the annotations on the evaluation sets and the information extracted. Our contributions will help researchers in this area make sure that they have advanced the state of the art not only conceptually but also empirically; they will also help practitioners make informed decisions on which proposal is the most adequate for a particular problem. This conference is a good forum in which to discuss our ideas, spread them to help improve the evaluation of information extraction proposals, and gather valuable feedback from other researchers.
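The basic scoring step underlying any such empirical comparison can be sketched as precision/recall/F1 over sets of extracted (attribute, value) pairs; statistically sound comparisons across datasets (e.g. ranking tests) would be layered on top of per-document scores like these:

```python
# Per-document scoring of an information extractor against a gold
# standard, with records represented as (attribute, value) pairs.
def score(extracted, gold):
    extracted, gold = set(extracted), set(gold)
    tp = len(extracted & gold)                      # exact matches only
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = [("title", "A"), ("price", "9.99"), ("author", "B")]
extracted = [("title", "A"), ("price", "9.90")]
p, r, f = score(extracted, gold)
print(p, r, f)   # precision 0.5, recall ~0.33, F1 ~0.4
```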

Keywords: web information extractors, information extraction evaluation method, Google Scholar, web

Procedia PDF Downloads 245
2372 Sustainable Production of Algae through Nutrient Recovery in the Biofuel Conversion Process

Authors: Bagnoud-Velásquez Mariluz, Damergi Eya, Grandjean Dominique, Frédéric Vogel, Ludwig Christian

Abstract:

The sustainability of algae-to-biofuel processes is seriously affected by the energy-intensive production of fertilizers. Large amounts of nitrogen and phosphorus are required for large-scale production, resulting in many cases in a negative impact on limited mineral resources. To realize the algal bioenergy opportunity, it appears crucial to promote processes that recover nutrients and/or make use of renewable sources, including waste. Hydrothermal (HT) conversion is a promising and suitable technology for generating biofuels from microalgae. Besides the fact that water is used as a “green” reactant and solvent and that no biomass drying is required, the technology offers great potential for nutrient recycling. This study evaluated the possibility of treating the aqueous HT effluent through the growth of microalgae while producing renewable algal biomass. As demonstrated in previous work by the authors, the HT aqueous product, besides containing N, P, and other important nutrients, presents a small fraction of rarely studied organic compounds. Therefore, heteroaromatic compounds extracted from the HT effluent were the target of the present research; they were profiled using GC-MS and LC-MS-MS. The results indicate the presence of cyclic amides, piperazinediones, amines, and their derivatives. The most prominent nitrogenous organic compounds (NOCs) in the extracts, namely 2-pyrrolidinone and β-phenylethylamine (β-PEA), were carefully examined for their effect on microalgae. These two substances were prepared at three different concentrations (10, 50 and 150 ppm). The toxicity bioassay used three different microalgal strains: Phaeodactylum tricornutum, Chlorella sorokiniana and Scenedesmus vacuolatus. The confirmed IC50 was ca. 75 ppm in all cases. 
Experimental conditions were set up for the growth of microalgae in the aqueous phase by adjusting the nitrogen concentration (the key nutrient for algae) to match that of a known commercial medium. The concentrations of specific NOCs were thereby lowered to 8.5 mg/L 2-pyrrolidinone, 1 mg/L δ-valerolactam and 0.5 mg/L β-PEA. Growth in the diluted HT solution remained steady, with no evidence of inhibition. An additional ongoing test is addressing the possibility of applying an integrated water clean-up step using the existing hydrothermal catalytic facility.

Keywords: hydrothermal process, microalgae, nitrogenous organic compounds, nutrient recovery, renewable biomass

Procedia PDF Downloads 402
2371 Effects of Using a Recurrent Adverse Drug Reaction Prevention Program on Safe Use of Medicine among Patients Receiving Services at the Accident and Emergency Department of Songkhla Hospital Thailand

Authors: Thippharat Wongsilarat, Parichat tuntilanon, Chonlakan Prataksitorn

Abstract:

Recurrent adverse drug reactions are harmful to patients, causing mild to fatal illnesses, and affect not only patients but also their relatives and organizations. The objective was to compare safe use of medicine among patients before and after using the recurrent adverse drug reaction prevention program. This was quasi-experimental research with a target population of 598 patients with a drug allergy history. Data were collected through an observation form tested for validity by three experts (IOC = 0.87) and analyzed with a descriptive statistic (percentage). The research was conducted jointly with a multidisciplinary team to analyze and determine the weak and strong points in the recurrent adverse drug reaction prevention system during the past three years, in which 546, 329, and 498 incidences, respectively, were found. Of these, 379, 279, and 302 incidences, or 69.40, 84.80, and 60.64 percent of the patients with drug allergy history, respectively, were found to have been caused by an incomplete warning system. In addition, differences were found in practices for caring for patients with drug allergy history that did not cover all steps of the patient care process, especially a lack of repeated checking and a lack of communication between the multidisciplinary team members. Therefore, the recurrent adverse drug reaction prevention program was developed with complete warning points in the information technology system, a repeated checking step, and communication among the related multidisciplinary team members, starting from the hospital identity card room, patient history recording officers, nurses, the physicians who prescribe the drugs, and pharmacists. Also included in the system were surveillance, nursing, recording, and linking the data to referring units. 
There was also training concerning adverse drug reactions by pharmacists, monthly meetings to explain the process to practice personnel, creation of a safety culture, random checking of practice, motivational encouragement, supervision, control, follow-up, and evaluation of practice. After implementation, the rate of prescribing drugs to which patients were allergic was 0.08 per 1,000 prescriptions, and the incidence rate of recurrent drug reactions was 0 per 1,000 prescriptions. Surveillance of recurrent adverse drug reactions covering all service points can ensure safe use of medicine for patients.

Keywords: recurrent drug, adverse reaction, safety, use of medicine

Procedia PDF Downloads 451
2370 Anticancer Effect of Rutin Isolated from the Methanolic Extract of Triticum Aestivum Straw in Mice

Authors: Savita Dixit

Abstract:

Rutin is a bioactive flavonoid isolated from the straw of Triticum aestivum and possesses various pharmacological applications. The aim of this study was to evaluate the chemopreventive potential of rutin in an experimental skin carcinogenesis mouse model. Skin tumors were induced by topical application of 7,12-dimethylbenz(a)anthracene (DMBA) and promoted by croton oil in Swiss albino mice. To assess its chemopreventive potential, rutin was orally administered at doses of 200 mg/kg and 400 mg/kg body weight three times weekly for 16 weeks. The development of skin carcinogenesis was assessed by histopathological analysis. Reductions in tumor size and in the cumulative number of papillomas were seen with rutin treatment, and the average latent period was significantly increased compared to the carcinogen-treated control. Rutin produced a significant decrease in the activities of the serum enzymes serum glutamate oxaloacetate transaminase (SGOT), serum glutamate pyruvate transaminase (SGPT), and alkaline phosphatase (ALP), and in bilirubin, when compared with the control. It significantly increased the levels of the oxidative stress defences glutathione (GSH), superoxide dismutase (SOD) and catalase, and the elevated level of lipid peroxidation in the control group was significantly inhibited by rutin administration. The results of the present study suggest a chemopreventive effect of rutin in DMBA/croton oil-induced skin carcinogenesis in Swiss albino mice, one probable reason being its antioxidant potential.

Keywords: chemoprevention, papilloma, rutin, skin carcinogenesis

Procedia PDF Downloads 335
2369 Effects of Paternity: A Comparative Study to Analyze the Organization's Support in the Psychological Development of Children in India and USA

Authors: Aayushi Dalal

Abstract:

It is the mother who bears the child in her womb for nine months, and it is deeply rooted in Indian culture that it is solely the responsibility of women to take care of the children; as a result, gender roles are stereotyped. Instead of a 50-50 partnership in parenting, it remains commonplace that men take the role of breadwinner while women nurture the children by staying at home. Mothers are thus considered to be more psychologically connected to the children than fathers. But current society is observing a dilution of parental roles, which can create a gap in understanding from the organization’s perspective. This is the basis of the study. The emergence of women into the job market has forever changed how society views the traditional roles of fathers and mothers. Feminism and financial power have reformed the classic parenting model. This has given rise to a more open and flexible society, consequently emphasizing the father's importance in the emotional well-being of the child, fathers being equally capable caretakers and disciplinarians. This study focuses on analyzing the comparative differences in the father's role in the psychological development of the child in India and the USA, while taking into consideration the organization’s support towards fathers. A sample of 150 fathers, 75 from India and 75 from the USA, was selected, and a structured survey was carried out with several open-ended as well as closed-ended questions probing the issue. Care was taken that environmental factors had as minimal an effect as possible on the subjects. The findings of this research provide a framework for fathers to understand the magnitude of their role in their child's upbringing. This would not only improve the "father-child" relationship but also make organizations more sympathetic towards their employees.

Keywords: paternity, child development, psychology, gender role, organization policy

Procedia PDF Downloads 213
2368 Prospective Relations of Childhood Maltreatment, Temperament and Delinquency among Prisoners: Moderated Mediation Effect of Age and Education

Authors: Razia Anjum, Zaqia Bano, Chan Wai

Abstract:

Temperament has been described in the literature as a multifaceted and potentially value-laden construct, yet research in forensic psychology remains scarce, particularly in South Asian countries. The present study explored the mediating effect of temperament on the relationship between childhood maltreatment and delinquency, and further examined the moderating effects of prisoners' age and education. Data provided by 517 prisoners (407 males, 110 females) from four district prisons in Pakistan were analyzed using Variable System for Windows, version 1.3. The study employed a cross-sectional research design, and a representative sample was recruited through purposive sampling. Only prisoners who had been maltreated in childhood, whether through physical abuse, psychological abuse, sexual abuse, or emotional neglect, were included. Childhood adversities were assessed with the 'Child Abuse Self-Report Scale', temperament styles were measured with the 'Adult Temperament Scale', and delinquent behaviors were then investigated. The findings suggested that the four temperament styles (choleric, melancholic, phlegmatic, and sanguine) mediated the childhood maltreatment-delinquency relationship in late adulthood but not in early adulthood. A marked finding was the significant moderating effect of prisoners' age and level of education on the relationships of temperament with childhood maltreatment and delinquency; these results are consistent with views on cumulative pathways to delinquency that run through the effects of childhood maltreatment. The results indicated that choleric and melancholic temperaments were positive predictors of delinquency, whereas phlegmatic and sanguine temperaments were negative predictors.
Each type of temperament thus leaves a distinct trace on delinquency, which could be addressed by modifying individual temperament. On the basis of these results, it can be concluded that the inclination toward delinquent behaviors, including theft, drug abuse, lying, noncompliant behavior, police encounters, violence, cheating, gambling, harassment, homosexuality, and heterosexuality, could be minimized if temperament is properly screened. Moreover, the study identified two further significant moderation effects: age moderated involvement in delinquent behaviors, and education moderated the relationship between childhood maltreatment and temperament. The findings suggested that the probability of involvement in delinquent behaviors decreases with increasing age, and the results were consistent with the assumption that education can act as a buffer that amplifies or dampens the effects of trauma and can shape temperament accordingly. The results also align with views on cumulative disadvantage arising from the socio-psychological fault lines of the community.
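The regression logic behind such a mediation analysis (path a from maltreatment to temperament, path b from temperament to delinquency, indirect effect a x b) can be sketched in a few lines. The data and effect sizes below are synthetic and purely illustrative; they are not taken from the study, which used Variable System for Windows:

```python
import random

random.seed(0)
n = 500
# Synthetic data: maltreatment (X) raises a temperament score (M),
# which in turn raises delinquency (Y). Effect sizes are invented.
X = [random.gauss(0, 1) for _ in range(n)]
M = [0.5 * x + random.gauss(0, 1) for x in X]
Y = [0.4 * m + 0.2 * x + random.gauss(0, 1) for m, x in zip(M, X)]

def slope(x, y):
    """Simple OLS slope of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((u - mx) * (v - my) for u, v in zip(x, y))
    sxx = sum((u - mx) ** 2 for u in x)
    return sxy / sxx

def ols2(x1, x2, y):
    """OLS of y on two predictors via centered 2x2 normal equations."""
    center = lambda v: [u - sum(v) / len(v) for u in v]
    x1, x2, y = center(x1), center(x2), center(y)
    s11 = sum(u * u for u in x1)
    s22 = sum(u * u for u in x2)
    s12 = sum(u * v for u, v in zip(x1, x2))
    s1y = sum(u * v for u, v in zip(x1, y))
    s2y = sum(u * v for u, v in zip(x2, y))
    det = s11 * s22 - s12 * s12
    return (s22 * s1y - s12 * s2y) / det, (s11 * s2y - s12 * s1y) / det

a = slope(X, M)               # path a: maltreatment -> temperament
b, c_prime = ols2(M, X, Y)    # path b and direct effect c'
indirect = a * b              # mediated (indirect) effect
print(f"a={a:.2f}  b={b:.2f}  indirect={indirect:.2f}")
```

A moderated mediation would additionally interact these paths with age and education; dedicated tools (as used in the study) also supply bootstrap confidence intervals for the indirect effect.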

Keywords: delinquent behaviors, temperament, prisoners, moderated mediation analysis

Procedia PDF Downloads 102
2367 Optimization of Multi Commodities Consumer Supply Chain: Part 1-Modelling

Authors: Zeinab Haji Abolhasani, Romeo Marian, Lee Luong

Abstract:

This paper and its companions (Part II, Part III) concentrate on optimizing a class of supply chain problems known as the Multi-Commodities Consumer Supply Chain (MCCSC) problem. The MCCSC problem belongs to the production-distribution (P-D) planning category. It aims to determine facility locations, consumer allocation, and facility configuration so as to minimize the total cost (CT) of the entire network. These facilities can be, but are not limited to, manufacturing units (MUs), distribution centres (DCs), and retailers/end-users (REs). To address this problem, three major tasks are undertaken. First, a mixed-integer non-linear programming (MINLP) mathematical model is developed. Then, the system's behavior under different conditions is observed using a simulation modeling tool. Finally, the optimal solution (minimum CT) of the system is obtained using a multi-objective optimization technique. Owing to the large size of the problem and the uncertainty involved in finding the optimal solution, the integration of modeling and simulation methodologies is proposed, followed by the development of a new approach known as GASG, a genetic algorithm based on granular simulation, which is the subject of the methodology of this research. In Part II, the MCCSC is simulated using a discrete-event simulation (DES) device within an integrated environment of the SimEvents and Simulink toolboxes of the MATLAB® software package, followed by a comprehensive case study to examine the given strategy. The effect of genetic operators on the optimal/near-optimal solution obtained by the simulation model is discussed in Part III.
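As a rough illustration of the genetic-algorithm side of such an approach, the sketch below applies a plain GA (tournament selection, one-point crossover, bit-flip mutation) to a toy facility-location instance: a bitstring chromosome encodes which distribution centres to open, and fitness is the total network cost CT (fixed opening costs plus transport). The instance data, operators, and parameters are invented for illustration and are not from the paper, which couples the GA with granular simulation:

```python
import random

random.seed(1)

# Hypothetical toy instance: 5 candidate DCs serving 12 retailers.
FIXED = [40, 55, 30, 60, 45]                        # DC opening costs
DIST = [[random.randint(1, 20) for _ in range(5)]   # retailer->DC transport cost
        for _ in range(12)]

def total_cost(genome):
    """Total network cost CT: fixed costs of open DCs plus each
    retailer served from its cheapest open DC."""
    if not any(genome):
        return float("inf")                         # at least one DC must open
    fixed = sum(f for f, g in zip(FIXED, genome) if g)
    transport = sum(min(d for d, g in zip(row, genome) if g) for row in DIST)
    return fixed + transport

def evolve(pop_size=30, gens=60, pm=0.1):
    pop = [[random.randint(0, 1) for _ in range(5)] for _ in range(pop_size)]
    best = min(pop, key=total_cost)
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            p1 = min(random.sample(pop, 3), key=total_cost)  # tournament
            p2 = min(random.sample(pop, 3), key=total_cost)
            cut = random.randint(1, 4)                       # one-point crossover
            child = [1 - g if random.random() < pm else g    # bit-flip mutation
                     for g in p1[:cut] + p2[cut:]]
            nxt.append(child)
        pop = nxt
        gen_best = min(pop, key=total_cost)
        if total_cost(gen_best) < total_cost(best):
            best = gen_best                                  # keep global best
    return best

best = evolve()
print("open DCs:", best, "cost:", total_cost(best))
```

In the actual GASG approach, the fitness evaluation would be replaced by a (granular) simulation run of the supply chain rather than a closed-form cost function.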

Keywords: supply chain, genetic algorithm, optimization, simulation, discrete event system

Procedia PDF Downloads 308
2366 Reconceptualizing “Best Practices” in Public Sector

Authors: Eftychia Kessopoulou, Styliani Xanthopoulou, Ypatia Theodorakioglou, George Tsiotras, Katerina Gotzamani

Abstract:

Public sector managers frequently herald that implementing best practices as a set of standards may lead to superior organizational performance. However, recent research questions the objectification of best practices, highlighting (a) the inability of public sector organizations to develop innovative administrative practices, as well as (b) the adoption of stereotypical, renowned practices inculcated in the public sector by international governance bodies. The process through which organizations construe what a best practice is remains a black box yet to be investigated, given the trend of continuous change in public sector performance as well as the burgeoning interest in sharing popular administrative practices put forward by international bodies. This study aims to describe and understand how organizational best practices are constructed by public sector performance management teams, acting as benchmarkers, during a benchmarking-mediated performance improvement process, and what mechanisms enable this construction. A critical realist action research methodology is employed, starting from a description of various approaches to the nature of best practice when a benchmarking-mediated performance improvement initiative, such as the Common Assessment Framework, is applied. First, we observed the benchmarkers' management of best practices in a public organization in order to map their theories-in-use. As a second step, we contextualized best administrative practices by reflecting the different perspectives that emerged from the previous stage in the design and implementation of an interview protocol. We used this protocol to conduct 30 semi-structured interviews with "best practice" process owners in order to examine their experiences and performance needs. Previous research on best practices has shown that the needs and intentions of benchmarkers cannot be detached from the causal mechanisms of the various contexts in which they work.
Such causal mechanisms can be found in (a) process owner capabilities, (b) the structural context of the organization, and (c) state regulations. Therefore, the first part of the interview protocol was theoretically informed, so as to identify the causal mechanisms suggested by previous research, and was supplemented with questions regarding the provision of best practice support from the government. The findings of this work include (a) a causal account of the nature of best administrative practices in the Greek public sector that sheds light on their management, (b) a description of the various contexts affecting best practice conceptualization, and (c) a description of how their interplay changed the organization's best practice management.

Keywords: benchmarking, action research, critical realism, best practices, public sector

Procedia PDF Downloads 121
2365 A Survey of Field Programmable Gate Array-Based Convolutional Neural Network Accelerators

Authors: Wei Zhang

Abstract:

With the rapid development of deep learning, neural networks and deep learning algorithms play a significant role in various practical applications. Owing to their high accuracy and good performance, Convolutional Neural Networks (CNNs) in particular have become a research hotspot in the past few years. However, network sizes have grown increasingly large to meet the demands of practical applications, which poses a significant challenge to constructing high-performance implementations of deep learning neural networks. Many of these application scenarios also place strict requirements on the performance and power consumption of hardware devices. It is therefore particularly critical to choose a suitable computing platform for the hardware acceleration of CNNs. This article surveys recent advances in Field Programmable Gate Array (FPGA)-based acceleration of CNNs. Various designs and implementations of FPGA-based accelerators across different devices and network models are reviewed and compared against their Graphics Processing Unit (GPU), Application-Specific Integrated Circuit (ASIC), and Digital Signal Processor (DSP) counterparts, accompanied by our own critical analysis and comments. Finally, we discuss these acceleration and optimization methods on FPGA platforms from different perspectives to further explore the opportunities and challenges for future research, and we offer a prospect for the future development of FPGA-based accelerators.
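The core computation such accelerators target is the convolution loop nest sketched below; FPGA designs typically unroll the inner kernel-window loops into parallel multiply-accumulate units and pipeline or tile the outer output loops. This is an illustrative reference model of the computation only, not any specific accelerator design from the surveyed works:

```python
def conv2d(image, kernel):
    """Naive valid-mode 2-D convolution (cross-correlation form),
    the loop nest that FPGA accelerators parallelise."""
    H, W = len(image), len(image[0])
    K = len(kernel)
    out = [[0.0] * (W - K + 1) for _ in range(H - K + 1)]
    for oy in range(H - K + 1):          # output rows (often tiled)
        for ox in range(W - K + 1):      # output cols (often pipelined)
            acc = 0.0
            for ky in range(K):          # kernel window: these inner loops
                for kx in range(K):      # map to unrolled MAC units
                    acc += image[oy + ky][ox + kx] * kernel[ky][kx]
            out[oy][ox] = acc
    return out

img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
edge = [[1, 0], [0, -1]]                 # tiny 2x2 diagonal-difference kernel
print(conv2d(img, edge))                 # -> [[-4.0, -4.0], [-4.0, -4.0]]
```

A full CNN layer adds input/output channel loops around this nest, which is where most of the surveyed designs spend their parallelisation and memory-bandwidth budget.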

Keywords: deep learning, field programmable gate array, FPGA, hardware accelerator, convolutional neural networks, CNN

Procedia PDF Downloads 122
2364 Linguistic Analysis of Argumentation Structures in Georgian Political Speeches

Authors: Mariam Matiashvili

Abstract:

Argumentation is an integral part of our daily communication, formal or informal. Argumentative reasoning, techniques, and language tools are used both in personal conversations and in the business environment. Verbalizing opinions requires distinctive syntactic-pragmatic structural units, namely arguments, which add credibility to a statement. The study of argumentative structures allows us to identify the linguistic features that make a text argumentative, and knowing which elements make up an argumentative text in a particular language helps users of that language improve their skills. Natural language processing (NLP) has also become especially relevant recently; in this context, one of the main emphases is on the computational processing of argumentative texts, which will enable the automatic recognition and analysis of large volumes of textual data. This research deals with the linguistic analysis of the argumentative structures of Georgian political speeches, in particular the linguistic structure, characteristics, and functions of the parts of the argumentative text: claims, support, and attack statements. The research aims to describe the linguistic cues that give a sentence a judgmental/controversial character and that help identify the reasoning parts of an argumentative text. The empirical data come from the Georgian Political Corpus, particularly TV debates. Consequently, the texts are of a dialogical nature, representing a discussion between two or more people (most often between a journalist and a politician).
The research uses the following approaches to identify and analyze argumentative structures: Lexical Classification and Analysis, which identifies lexical items relevant to the process of creating argumentative texts and builds an argumentation lexicon (groups of words gathered from a semantic point of view); Grammatical Analysis and Classification, the grammatical analysis of the words and phrases identified on the basis of the arguing lexicon; and Argumentation Schemes, which describes and identifies the argumentation schemes most likely used in Georgian political speeches. As a final step, we analyzed the relations between these components. For example, if an identified argument scheme is "Argument from Analogy", the identified lexical items also express analogy semantically, and in Georgian they are most likely adverbs. As a result, we created a lexicon of the words that play a significant role in creating Georgian argumentative structures. The linguistic analysis has shown that verbs play a crucial role in creating argumentative structures.
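The lexicon-based step can be sketched minimally as follows. The cue words here are hypothetical English stand-ins (the actual lexicon is Georgian and semantically grouped), and the matching is deliberately naive: token overlap against per-role cue sets:

```python
# Hypothetical miniature argumentation lexicon: English stand-ins for the
# Georgian cue words, grouped by the argumentative role they signal.
ARG_LEXICON = {
    "claim":   {"must", "should", "clearly", "undoubtedly"},
    "support": {"because", "since", "therefore", "evidence"},
    "analogy": {"like", "similarly", "likewise"},   # cues for Argument from Analogy
}

def tag_sentence(sentence):
    """Return the argumentative roles whose cue words appear in the sentence."""
    tokens = set(sentence.lower().replace(",", " ").split())
    return sorted(role for role, cues in ARG_LEXICON.items() if tokens & cues)

result = tag_sentence("We must act now because the evidence is overwhelming")
print(result)   # -> ['claim', 'support']
```

The study's pipeline goes further by adding grammatical analysis of the matched items and mapping cue combinations onto argumentation schemes, rather than stopping at surface lookup.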

Keywords: Georgian, argumentation schemas, argumentation structures, argumentation lexicon

Procedia PDF Downloads 66