Search results for: profit and loss sharing small scale businesses
11795 Contactless and Multiple Space Debris Removal by Micro to Nano Satellites
Authors: Junichiro Kawaguchi
Abstract:
Space debris has emerged as a threat to the use of low Earth orbit owing to the large number of spacecraft there. A number of studies and patents on debris removal have been proposed and published to date. They assume servicing spacecraft, i.e., robots built to access the target debris objects. These robots must be sophisticated enough to approach the debris automatically, articulating their attitude and translational motion with respect to it. This paper presents the idea of using a torpedo-like, unsophisticated, and disposable third body, in addition to the first body (the servicing robot) and the second body (the target debris). The third body is launched from the first body from a distance farther than the size of the second body. This paper presents the method and the system by which the third body is launched from the first body. The third body carries both a net and an inflatable or extendible drag-deceleration device and is built small and light. This method enables even a micro- to nano-satellite to perform contactless, multiple debris removal in a single flight.
Keywords: ballute, debris removal, echo satellite, gossamer, gun-net, inflatable space structure, small satellite, uncooperative target
Procedia PDF Downloads 121
11794 Inclusion Body Refolding at High Concentration for Large-Scale Applications
Authors: J. Gabrielczyk, J. Kluitmann, T. Dammeyer, H. J. Jördening
Abstract:
High-level expression of proteins in bacteria often produces insoluble protein aggregates called inclusion bodies (IBs). These contain mainly one type of protein and therefore offer an easy and efficient route to purified protein. On the other hand, proteins in IBs are normally devoid of function and need special treatment to become active. Most refolding techniques aim at diluting the solubilizing chaotropic agents. Unfortunately, optimal refolding conditions have to be found empirically for every protein, and for large-scale applications a simple refolding process with high yields and high final enzyme concentrations is still missing. The constructed plasmid pASK-IBA63b, containing the sequence of fructosyltransferase (FTF, EC 2.4.1.162) from Bacillus subtilis NCIMB 11871, was transformed into E. coli BL21 (DE3) Rosetta. The bacterium was cultivated in a fed-batch bioreactor, and the FTF produced was obtained mainly as IBs. For refolding experiments, five different amounts of IBs were solubilized in urea buffer at protein concentrations of 0.2-8.5 g/L. Solubilizates were refolded by batch or continuous dialysis. The refolding yield was determined by measuring the protein concentration of the clear supernatant before and after dialysis. Particle size was measured by dynamic light scattering. We tested the solubilization properties of the fructosyltransferase IBs. Particle size measurements revealed that solubilization of the aggregates is achieved at urea concentrations of 5 M or higher, which was confirmed by absorption spectroscopy. All results confirm previous findings that refolding yields depend on the initial protein concentration: as the initial concentration rose from 0.2 to 8.5 g/L, yields dropped from 67% to 12% in batch dialysis and from 72% to 19% in continuous dialysis. Frequently used additives such as sucrose and glycerol had no effect on refolding yields.
Buffer screening indicated a significant increase in the activity, and also the temperature stability, of FTF with citrate/phosphate buffer. By adding citrate to the dialysis buffer, we were able to increase the refolding yields to 82-47% in the batch and 90-74% in the continuous process. Further experiments showed that, in general, a higher ionic strength of the buffer had a major impact on refolding yields; doubling the buffer concentration increased the yields up to threefold. Finally, we achieved comparably high refolding yields while reducing the chamber volume, and hence the amount of buffer needed, by 75%. The refolded enzyme had an optimal activity of 12.5 ± 0.3 × 10⁴ units/g. However, detailed experiments with native FTF revealed reaggregation of the molecules and a loss in specific activity depending on enzyme concentration and particle size. For that reason, we currently focus on developing a process of simultaneous enzyme refolding and immobilization. The results of this study show a new approach to finding optimal refolding conditions for inclusion bodies at high concentrations. Straightforward buffer screening and an increase in ionic strength can improve the refolding yield of the target protein by 400%. Gentle removal of the chaotrope with continuous dialysis increases the yields by an additional 65%, independent of the refolding buffer applied. In general, time is the crucial parameter for successful refolding of solubilized proteins.
Keywords: dialysis, inclusion body, refolding, solubilization
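The yield metric used above is simple arithmetic: the ratio of the clear-supernatant protein concentration after dialysis to that before it. A minimal sketch (the function name and the sample concentrations are hypothetical, chosen only to reproduce the reported low-load continuous-dialysis figure of 72%):

```python
def refolding_yield(c_before_g_per_l, c_after_g_per_l):
    """Refolding yield (%): clear-supernatant protein concentration after
    dialysis divided by the concentration before dialysis."""
    return 100.0 * c_after_g_per_l / c_before_g_per_l

# Hypothetical readings: 0.2 g/L loaded, 0.144 g/L recovered in the supernatant.
print(round(refolding_yield(0.2, 0.144), 1))  # 72.0
```

The same calculation at high load (e.g., 8.5 g/L in, 1.615 g/L out) gives the 19% figure, illustrating the concentration dependence the abstract reports.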
Procedia PDF Downloads 294
11793 Impulsivity, Emotional Regulation, Problematic Mukbang Watching and Eating Disorders in University Students
Authors: Aqsa Butt, Nida Zafar
Abstract:
The study assesses the relationships among impulsivity, emotional regulation, problematic mukbang watching, and eating disorders in university students. It was hypothesized that (a) there is a relationship among impulsivity, emotional regulation, problematic mukbang watching, and eating disorders in university students; (b) impulsivity and emotional regulation predict problematic mukbang watching; and (c) problematic mukbang watching predicts eating disorders. A correlational research design was used. A sample of 200 students was taken from different public and private universities in Lahore. The Emotional Regulation Questionnaire (Gross & John, 2003), the Abbreviated Barratt Impulsiveness Scale (Christopher et al., 2014), the Problematic Mukbang Watching Scale (Kircaburun et al., 2020), and the Eating Disorder Diagnostic Scale (Stice et al., 2004) were used for assessment. Results showed a significant positive relationship of impulsivity and expressive suppression with problematic mukbang watching, and a significant negative relationship between cognitive reappraisal and problematic mukbang watching. Problematic mukbang watching is significantly positively related to bulimia nervosa and binge eating. Furthermore, impulsivity and expressive suppression are significant positive predictors of problematic mukbang watching, while cognitive reappraisal is a significant negative predictor. Problematic mukbang watching, in turn, significantly positively predicts bulimia nervosa and binge eating. The research has important implications for university students: excessive watching of such videos can contribute to eating disorders such as bulimia nervosa and binge eating.
This research provides an understanding of the effects of mukbang watching and adds to the existing body of knowledge on eating disorders.
Keywords: impulsivity, emotional regulation, problematic mukbang watching, eating disorders, students
Procedia PDF Downloads 60
11792 Strategic Business Solutions for an Ageing SME
Authors: N. G. Teik Hiang, Fathyah Hashim
Abstract:
This is a case study of how strategic management techniques can help resolve the problems faced by an ageing Small and Medium Enterprise (SME). Resolving problems strategically proved possible in this case, despite the common view that strategic management is useful mostly for large corporations. SMEs can also use strategic management in running their business and in determining their future course of action and strategies in order to survive in an ever more competitive world. Strategic orientation is the key to the survival and development of small and medium enterprises. To adapt to fierce market competition, ageing SMEs should improve their competitiveness and operational efficiency. They must therefore establish a sense of strategic management, improve their strategic management skills, combine these with their own unique characteristics, and work out practical strategies to develop the core competitiveness of the enterprise in order to be sustainable. In this case, the internal strengths and weaknesses of an SME were identified. Strategic internal and external factors were classified and then used to formulate potential strategies for the various problems faced by the SME. These strategies were further matched to take advantage of the opportunities, overcome the weaknesses, and minimize the threats the company is facing. Tan, a consultant given the opportunity to formulate a plan for the business, started with environmental scanning (internal and external environmental analysis), assessed the company's strengths and weaknesses, and then generated, analyzed, and evaluated strategies. He held numerous discussions with the owner of the business and senior management in order to match the key internal and external factors and formulate alternative strategies for solving the company's problems.
Some of the recommended solutions were inspired by the owner of the business, a very enterprising and experienced businessman.
Keywords: strategic orientation, strategic management, SME, core competitiveness, sustainable
Procedia PDF Downloads 419
11791 Rapid and Long-term Alien Language Analysis - Forming Frameworks for the Interpretation of Alien Communication for More Intelligent Life
Authors: Samiksha Raviraja, Junaid Arif
Abstract:
One of the most important abilities of a species is the ability to communicate. This paper proposes the steps to take if aliens were to come into contact with humans, and how humans would communicate with them. Such an event would be time-sensitive, making communication of the utmost importance. First, humans would need to establish mutual peace by conveying that there is no threat to the alien race. Second, the aliens would need to acknowledge this understanding and reciprocate. This would be extremely difficult regardless of their intelligence level, unless they are very human-like and communicate in ways similar to ours. The first step towards understanding their mind is to analyze their level of intelligence: Level 1, low intelligence; Level 2, human-like intelligence; or Level 3, advanced or high intelligence. These three levels go hand in hand with the Kardashev scale. The Barrow scale will also be used to categorize alien species, in the hope of developing a common universal language for communication. This paper delves into how the level of intelligence can be used to achieve communication with aliens, predicting various possible scenarios and outcomes through an intensive categorization system. This can be achieved by studying their emotional and intelligence quotients (along with their technological and scientific knowledge), as well as the limitations and capabilities of their intelligence. Observing how they respond and react (expressions and senses) to different kinds of scenarios, items, and people will provide data that enables sound categorization. It can be hypothesized that the more human-like aliens are, or the more they can relate to humans, the more likely communication becomes.
Depending on the situation, humans can teach the aliens a human language, humans can learn an alien language, or both races can work together to develop a mutual understanding or mode of communication. There are three likely ways of contact: aliens visit Earth; humans discover aliens during space exploration; or contact occurs through technology, in the form of signals. A much rarer case would be humans and aliens running into each other during space expeditions of their own. The first two possibilities allow a more in-depth analysis of the alien life, and enhanced results, compared with contact through signals alone. Finding a method of communicating with aliens is important not only to protect Earth and humans but also for the advancement of science through knowledge shared between the two species.
Keywords: intelligence, Kardashev scale, Barrow scale, alien civilizations, emotional and intelligence quotient
Procedia PDF Downloads 72
11790 A Pilot Study on Integration of Simulation in the Nursing Educational Program: Hybrid Simulation
Authors: Vesile Unver, Tulay Basak, Hatice Ayhan, Ilknur Cinar, Emine Iyigun, Nuran Tosun
Abstract:
The aim of this study is to analyze the effects of hybrid simulation, in which standardized patients and task trainers are employed simultaneously; for instance, to teach IV interventions, standardized patients and IV arm models are used together. The study was designed as quasi-experimental research. Before the implementation, ethical approval was obtained from the local ethics committee and administrative permission was granted by the nursing school. The study population comprised second-year nursing students (n=77). The participants were selected through a simple random sampling technique, and a total of 39 nursing students were included. The views of the participants were collected through a 12-item feedback form developed by the authors and the Patient Intervention Self-Confidence/Competence Scale. Participants reported advantages of the hybrid simulation practice, including making connections between the simulated scenario and real-life situations in clinical conditions, and recognizing the need to learn more about clinical practice. All stated that the implementation was very useful for them. They also reported three major gains: improvement of critical thinking skills (94.7%), improvement of decision-making skills (97.3%), and feeling like a nurse (92.1%). The total mean score of the participants on the Patient Intervention Self-Confidence/Competence Scale was 75.23±7.76. The findings suggest that hybrid simulation has positive effects on the integration of theoretical and practical activities before clinical practice for nursing students.
Keywords: hybrid simulation, clinical practice, nursing education, nursing students
Procedia PDF Downloads 293
11789 Trajectories of Depression, Anxiety and Stress among Breast Cancer Patients: Assessment at First Year of Diagnosis
Authors: Jyoti Srivastava, Sandhya S. Kaushik, Mallika Tewari, Hari S. Shukla
Abstract:
Little information is available about the development of psychological well-being over time among women undergoing treatment for breast cancer. The aim of this study was to identify the trajectories of depression, anxiety, and stress among women with early-stage breast cancer. Of the 48 Indian women with newly diagnosed early-stage breast cancer recruited from the surgical oncology unit, 39 completed an interview and were assessed for depression, anxiety, and stress (Depression Anxiety Stress Scale, DASS-21) before their first course of chemotherapy (baseline) and at follow-up interviews 3, 6, and 9 months thereafter. Growth mixture modeling was used to identify distinct trajectories of depression, anxiety, and stress symptoms, and logistic regression analysis was used to evaluate the characteristics of women in the distinct groups. Most women showed mild to moderate levels of depression and anxiety (68%) and normal to mild levels of stress (71%), but one in eleven women was chronically anxious (9%) and depressed (9%). Young age, having a partner, shorter education, and receiving chemotherapy but not radiotherapy may characterize women whose psychological symptoms remain strong nine months after diagnosis. By looking beyond the mean, it was found that several socio-demographic and treatment factors characterized the women whose depression, anxiety, and stress levels remained severe even nine months after diagnosis. The results suggest that support provided to cancer patients should focus especially on the relatively small group of patients most in need.
Keywords: psychological well-being, growth mixture modeling, logistic regression analysis, socio-demographic factors
Procedia PDF Downloads 147
11788 An Exploratory Study of Nasik Small and Medium Enterprises Cluster
Authors: Pragya Bhawsar, Utpal Chattopadhyay
Abstract:
Small and Medium Enterprises (SMEs) play a crucial role in meeting the economic objectives of an emerging nation. To support SMEs, the idea of creating clusters has been prevalent for the past two decades. This paper explores the impact of being in a cluster on the competitiveness of SMEs. To meet this objective, the Nasik cluster (India) was selected, and information was collected by means of two focus group discussions and a survey of thirty SMEs. The findings are interesting in revealing that a great deal of ambiguity flourishes under the concept of a 'cluster'. Besides the problems and opportunities of the firms in the cluster, the results bring to notice that the benefits of clusterization can reach SMEs only when the whole location is considered and understood as a single cluster, rather than as the many subsets (various forms of clusters) prevailing within it. Fostering such an understanding calls for harmony among the various stakeholders of the cluster. The dynamics of interaction among government, local industry associations, relevant institutions, large firms, and finally the SMEs that make the most of the location-based cluster are significant in shaping the host cluster's competitiveness, and vice versa.
Keywords: SMEs, industry clusters, common facility centres, co-creation, policy
Procedia PDF Downloads 294
11787 The Processing of Context-Dependent and Context-Independent Scalar Implicatures
Authors: Liu Jia’nan
Abstract:
Default accounts hold that there is a kind of scalar implicature that can be processed without context and that enjoys a psychological privilege over scalar implicatures that depend on context. In contrast, Relevance Theorists regard context as indispensable, because all scalar implicatures have to meet the demands of relevance in discourse. However, in Katsos' experiments, although adults quantitatively rejected under-informative utterances with lexical scales (context-independent) and ad hoc scales (context-dependent) at almost the same rate, they still regarded a violation in an utterance with a lexical scale as much more severe than one with an ad hoc scale. Neither default accounts nor Relevance Theory can fully explain this result. Two questions therefore arise: (1) Could this puzzling discrepancy be due to factors other than the generation of the scalar implicature? (2) Are ad hoc scales truly formed under the possible influence of mental context? That is, do participants generate scalar implicatures with ad hoc scales, or do they merely compare semantic differences among target objects in the under-informative utterance? In our Experiment 1, question (1) is addressed by replicating Katsos' experiment. Test materials are shown as pictures in PowerPoint, and each procedure is carried out under the guidance of a tester in a quiet room. Our Experiment 2 is intended to answer question (2). The pictorial test material is transformed into written words in DMDX, and the target sentence is shown word by word to participants in a soundproof room in our lab. The reading times of the target parts, i.e., the words carrying the scalar implicature, are recorded.
We presume that in the lexical-scale group, a standardized pragmatic mental context helps generate the scalar implicature as soon as the scalar word occurs, leading participants to expect the upcoming words to be informative. Thus, if the new input after the scalar word is under-informative, more time will be spent on the extra semantic processing. In the ad hoc-scale group, however, the scalar implicature can hardly be generated without the support of a fixed mental context for the scale. Whether the new input is informative or not therefore does not matter, and the reading times of the target parts will be the same in informative and under-informative utterances. The mind may be a dynamic system in which many factors co-occur. If Katsos' experimental result is reliable, it may shed light on the interplay of default accounts and contextual factors in scalar implicature processing. Based on our experiments, we might assume that no single dominant processing paradigm is plausible. Furthermore, in the processing of scalar implicatures, the semantic and pragmatic interpretations may be made in a dynamic interplay in the mind. For a lexical scale, the pragmatic reading may prevail over the semantic reading because of its greater exposure in daily language use, which may also let the possible default or standardized paradigm override the role of context. The objects in an ad hoc scale, however, are not usually treated as scale members in mental context, and the lexical-semantic association of the objects may prevent the pragmatic reading from generating the scalar implicature. Only when sufficient contextual factors are highlighted can the pragmatic reading gain privilege and generate the scalar implicature.
Keywords: scalar implicature, ad hoc scale, dynamic interplay, default account, Mandarin Chinese processing
Procedia PDF Downloads 323
11786 Addressing Supply Chain Data Risk with Data Security Assurance
Authors: Anna Fowler
Abstract:
When considering assets that may need protection, the mind turns to homes, cars, and investment funds. In most cases, those assets can be protected through security systems and insurance. Data is not the first thought that comes to mind as needing protection, even though data is at the core of most supply chain operations. It includes trade secrets, personally identifiable information (PII), and consumer data that can be used to enhance the overall experience. Data is a critical element of success for supply chains and should be one of the most critical areas to protect. In the supply chain industry, there are two major misconceptions about protecting data: (i) "We do not manage or store confidential/personally identifiable information (PII)." (ii) Reliance on third-party vendor security. These misconceptions can significantly derail organizational efforts to adequately protect data across environments. The first misconception, "We do not manage or store confidential/personally identifiable information (PII)," is dangerous because it implies the organization lacks proper data literacy. Enterprise employees zero in on PII while neglecting trade secret theft and the complete breakdown of information sharing. The second misconception forges an ideology that reliance on third-party vendor security will absolve the company of security risk. Instead, third-party risk has grown over the last two years and is one of the major causes of data security breaches. It is important to understand that a holistic approach should be taken to protecting data, and that approach does not reduce to purchasing a Data Loss Prevention (DLP) tool. A tool is not a solution.
To protect supply chain data, start by providing data literacy training to all employees and by negotiating the security component of vendor contracts to require data literacy training for the individuals and teams that may access company data. It is also important to understand the origin of the data and its movement, including risk identification, and to ensure that processes effectively incorporate data security principles. Evaluate and select DLP solutions to address specific concerns and use cases in conjunction with data visibility. These approaches are part of a broader solutions framework called Data Security Assurance (DSA). The DSA framework looks at all of the processes across the supply chain, including their corresponding architecture and workflows, employee data literacy, governance and controls, integration between third- and fourth-party vendors, DLP as a solution concept, and policies related to data residency. Within cloud environments, this framework is crucial for the supply chain industry to avoid regulatory implications and third/fourth-party risk.
Keywords: security by design, data security architecture, cybersecurity framework, data security assurance
Procedia PDF Downloads 89
11785 The Three-Zone Composite Productivity Model of Multi-Fractured Horizontal Wells under Different Diffusion Coefficients in a Shale Gas Reservoir
Authors: Weiyao Zhu, Qian Qi, Ming Yue, Dongxu Ma
Abstract:
Owing to the nano- to micro-scale pore structures and the massive multi-stage, multi-cluster hydraulic fracturing in shale gas reservoirs, the multi-scale seepage flows are far more complicated than in most conventional reservoirs and are crucial for the economic development of shale gas. In this study, a new multi-scale non-linear flow model was established and simplified on the basis of different diffusion and slip-correction coefficients. Because different flow laws apply in the fracture network and the matrix zone, a three-zone composite model is proposed. Then, using conformal transformation combined with the law of equivalent percolation resistance, the productivity equation of a fractured horizontal well, accounting for diffusion, slip, desorption, and absorption, was built. An analytic solution was derived, and the interference among the multi-cluster fractures was analyzed. The results indicated that shale gas diffusion lies mainly in the transition and Fick diffusion regions. The matrix permeability was found to be influenced by slippage and diffusion, which is determined by the pore pressure and pore diameter through the Knudsen number. With increased half-length of the fracture clusters, flow conductivity of the fractures, and permeability of the fracture network, the productivity of the fractured well also increased. Meanwhile, as the number of fractures increased, the distance between fractures decreased and productivity rose only slowly because of mutual interference among the fractures. For fractured horizontal wells, free gas was found to contribute most of the productivity, while the contribution of desorption increased with increasing pressure difference.
Keywords: multi-scale, fracture network, composite model, productivity
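The regime classification the abstract relies on (slip, transition, Fick/Knudsen diffusion) is conventionally read off the Knudsen number, the ratio of the gas mean free path to the pore diameter. A sketch under assumed conditions (methane-like kinetic diameter and hypothetical pressure and temperature, not parameters from the paper):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def mean_free_path(pressure_pa, temperature_k, molecule_diameter_m):
    """Gas mean free path from kinetic theory."""
    return K_B * temperature_k / (
        math.sqrt(2) * math.pi * molecule_diameter_m ** 2 * pressure_pa
    )

def knudsen(pore_diameter_m, pressure_pa, temperature_k, molecule_diameter_m=0.38e-9):
    """Knudsen number Kn = mean free path / pore diameter."""
    return mean_free_path(pressure_pa, temperature_k, molecule_diameter_m) / pore_diameter_m

def regime(kn):
    """Conventional flow-regime classification by Knudsen number."""
    if kn < 0.001:
        return "continuum"
    if kn < 0.1:
        return "slip"
    if kn < 10:
        return "transition"
    return "free-molecular"

# A hypothetical 5 nm pore at 5 MPa and 350 K lands in the transition region,
# the regime the abstract reports as dominant.
kn = knudsen(pore_diameter_m=5e-9, pressure_pa=5e6, temperature_k=350)
print(round(kn, 3), regime(kn))
```

The 0.38 nm default is the approximate kinetic diameter of methane; at higher pressures the same pore shifts toward the slip regime, which is why pore pressure and diameter jointly determine the matrix permeability correction.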
Procedia PDF Downloads 270
11784 Modified Side Plate Design to Suppress Lateral Torsional Buckling of H-Beam for Seismic Application
Authors: Erwin, Cheng-Cheng Chen, Charles J. Salim
Abstract:
One method of solving the lateral torsional buckling (LTB) problem is to use side plates to increase the buckling resistance of the beam. Some modifications to the side-plate design are made in this study to simplify construction in the field and reduce cost. In certain regions, side plates are not added: (1) at the beam ends, to preserve space for bolt installation, where the beam is instead strengthened by adding cover plates at both flanges; and (2) at the middle of the span, where the moment is smaller. Three small-scale, full-span beam specimens were tested under cyclic loading to investigate the LTB resistance and ductility of the proposed design method. Test results show that LTB deformation can be effectively suppressed and a very high ductility level achieved. Following the tests, a finite element analysis (FEA) model was established and verified against the test results. An intensive parametric study was conducted using the established FEA model. The analysis reveals that the length of the side plates is the most important parameter determining the performance of the beam, and that the required side-plate length is governed by several parameters: (1) the beam depth-to-flange-width ratio, (2) the beam slenderness ratio, (3) the strength and thickness of the side plates, (4) the compactness of the beam web and flange, and (5) the beam yield strength. At the end of the paper, a design formula for calculating the required side-plate length is suggested.
Keywords: cover plate, earthquake resistant design, lateral torsional buckling, side plate, steel structure
Procedia PDF Downloads 175
11783 Green Function and Eshelby Tensor Based on Mindlin's 2nd Gradient Model: An Explicit Study of Spherical Inclusion Case
Authors: A. Selmi, A. Bisharat
Abstract:
Using the Fourier transform and based on Mindlin's second-gradient model, which involves two length-scale parameters, the Green's function, the Eshelby tensor, and the Eshelby-like tensor for a spherical inclusion are derived. It is proved that the Eshelby tensor consists of two parts: the classical Eshelby tensor and a gradient part including the length-scale parameters, which enables interpretation of the size effect. When the strain gradient is not taken into account, the obtained Green's function and Eshelby tensor reduce to their analogues in classical elasticity. The Eshelby tensor inside and outside the inclusion, the volume average of the gradient part, and the Eshelby-like tensor are obtained explicitly. Unlike the classical Eshelby tensor, the results show that the components of the new Eshelby tensor vary with position and with the inclusion dimensions. It is demonstrated that the contribution of the gradient part should not be neglected.
Keywords: Eshelby tensor, Eshelby-like tensor, Green's function, Mindlin's 2nd gradient model, spherical inclusion
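As a point of reference (the standard classical-elasticity result, not an expression taken from this paper), the interior Eshelby tensor for a spherical inclusion in an isotropic matrix, to which the derived tensor should reduce when both length-scale parameters vanish, is

```latex
S_{ijkl} \;=\; \frac{5\nu - 1}{15(1-\nu)}\,\delta_{ij}\delta_{kl}
\;+\; \frac{4 - 5\nu}{15(1-\nu)}\left(\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk}\right),
```

where ν is the Poisson ratio of the matrix. Unlike the gradient-enriched tensor discussed above, this classical part is uniform inside the inclusion and independent of the inclusion radius, which is precisely why it cannot capture the size effect.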
Procedia PDF Downloads 270
11782 Effect of Dietary Inclusion of Moringa oleifera Leaf Meal on Blood Biochemical Changes and Lipid Profile of Vanaraja Chicken in Tropics
Authors: Kaushalendra Kumar, Abhishek Kumar, Chandra Moni, Sanjay Kumar, P. K. Singh, Ajeet Kumar
Abstract:
Present study investigated the dietary inclusion of Moringa oleifera leaf meal (MOLM) on production efficiency, hemato-biochemical profile and economy of Vanaraja birds under tropical condition. Experiment was conducted for a period of 56 days on 300 Vanaraja birds randomly divided in to five different experimental groups including control of 60 birds each group replicated with 20 chicks in each replicate. T1, T2, T3, T4, and T5 were offered with 0, 5, 10, 15, and 20% Moringa oleifera leaf meal along with basal ration. All the standard managemental practices were followed during experimental period including vaccination schedule. Locally available Moringa oleifera leaves were harvested at mature stage and allowed to dry under shady and aerated conditions. Thereafter, dried leaves were milled to make a leaf meal and stored in the airtight nylon bags to avoid any possible contamination from foreign material and use for experiment. Production parameters were calculated based on the amount of feed consumed and weight gain every weeks. The body weight gain of T2 group was significantly (P < 0.05) higher side whereas T3 group was comparable with control. The feed conversion ratio for T2 group was found to be significantly (P < 0.05) lower than all other treatment groups, while none of the group was comparable with each other. At the end of the experiment blood samples were collected from birds for haematology study while serum biochemistry performed using spectrophotometer following statndard protocols. The haematological attributes were significantly (P > 0.05) not differed among the groups. However, serum biochemistry showed significant reduction (P < 0.05) of blood urea nitrogen, uric acid and creatinine level with higher level of MOLM diet, indicates better utilization of protein supplemented through MOLM. 
Total cholesterol and triglyceride levels declined significantly (P < 0.05) compared with the control group as the level of MOLM in the basal diet increased; a decreasing trend in serum cholesterol was noted. The HDL value was highest for the T3 group and lowest for the T1 group, but no significant difference (P > 0.05) was found among the groups. This may be due to β-sitosterol, a bioactive compound present in MOLM, which lowers the plasma concentration of LDL. During the experiment, total cholesterol, LDL and VLDL levels decreased significantly (P < 0.05) compared with the control group. The production efficiency of the birds improved significantly with 5%, followed by 10%, Moringa oleifera leaf meal among the treatment groups. However, the maximum profit per kg live weight was recorded at the 10% level and the least profit in the 20% MOLM-fed group. It was concluded that dietary inclusion of MOLM improved overall performance without affecting metabolic status and was effective in reducing cholesterol levels, reflecting healthier chicken production for human consumption.
Keywords: hemato biochemistry, Moringa oleifera leaf meal, performance, Vanaraja birds
Procedia PDF Downloads 207
11781 Intelligent Minimal Allocation of Capacitors in Distribution Networks Using Genetic Algorithm
Authors: S. Neelima, P. S. Subramanyam
Abstract:
A distribution system is an interface between the bulk power system and the consumers. Among distribution systems, the radial configuration is popular because of its low cost and simple design. In distribution systems, the voltage at buses decreases with distance from the substation, and the losses are high. The reason for the voltage drop and high losses is an insufficient amount of reactive power, which can be provided by shunt capacitors. However, placing a capacitor of appropriate size is always a challenge. The optimal capacitor placement problem is thus to determine the locations and sizes of capacitors to be placed in distribution networks so as to efficiently reduce power losses and improve the voltage profile of the system. For this purpose, a two-stage methodology is used in this paper. In the first stage, the load flow of the pre-compensated distribution system is carried out using the dimension-reducing distribution load flow algorithm (DRDLFA). On the basis of this load flow, the potential locations for compensation are computed. In the second stage, a genetic algorithm (GA) is used to determine the optimal locations and sizes of the capacitors such that the cost of energy losses plus the capacitor cost is minimized. The method is tested on the IEEE 9- and 34-bus systems and compared with other methods in the literature.
Keywords: dimension reducing distribution load flow algorithm, DRDLFA, genetic algorithm, electrical distribution network, optimal capacitor placement, voltage profile improvement, loss reduction
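The second stage described above can be sketched as a simple genetic algorithm over discrete capacitor sizes. The candidate buses, size list, cost coefficients and the quadratic stand-in for the loss evaluation below are illustrative assumptions, not the paper's data; in the actual method the fitness would call the DRDLFA load flow rather than a toy loss model.

```python
import random

# Illustrative stand-in data (not from the paper): candidate buses,
# available capacitor sizes (kvar), cost coefficients and reactive loads.
SIZES = [0, 150, 300, 450, 600]
N_BUS = 5
KE, KC = 0.06, 3.0                      # $/kWh energy price, $/kvar capacitor cost
Q_DEMAND = [200, 350, 150, 400, 300]    # reactive load per candidate bus (kvar)

def loss_kw(q_comp):
    # Stand-in for the load-flow loss evaluation (DRDLFA in the paper):
    # uncompensated reactive flow contributes quadratically to losses.
    return sum(((d - q) / 100.0) ** 2 for d, q in zip(Q_DEMAND, q_comp))

def cost(ind):
    # Objective: annual energy-loss cost plus capacitor purchase cost.
    return KE * 8760 * loss_kw(ind) + KC * sum(ind)

def ga(pop=30, gens=100, pm=0.2, seed=1):
    rng = random.Random(seed)
    P = [[rng.choice(SIZES) for _ in range(N_BUS)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=cost)
        elite = P[: pop // 2]                       # keep the fitter half
        children = []
        while len(children) < pop - len(elite):
            a, b = rng.sample(elite, 2)             # pick two elite parents
            cut = rng.randrange(1, N_BUS)
            child = a[:cut] + b[cut:]               # single-point crossover
            if rng.random() < pm:                   # gene mutation
                child[rng.randrange(N_BUS)] = rng.choice(SIZES)
            children.append(child)
        P = elite + children
    return min(P, key=cost)
```

With elitism, the returned placement is always at least as cheap as leaving every bus uncompensated.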
Procedia PDF Downloads 390
11780 Seismic Protection of Automated Stocker System by Customized Viscous Fluid Dampers
Authors: Y. P. Wang, J. K. Chen, C. H. Lee, G. H. Huang, M. C. Wang, S. W. Chen, Y. T. Kuan, H. C. Lin, C. Y. Huang, W. H. Liang, W. C. Lin, H. C. Yu
Abstract:
The hi-tech industries in the Science Park in southern Taiwan were heavily damaged by a strong earthquake in early 2016. The financial loss in this event was attributed primarily to the automated stocker systems handling fully processed products, and recovery of the automated stocker systems from the aftermath proved to require considerable lead time. Therefore, developing effective means of protecting stockers against earthquakes has become the highest priority for risk minimization and business continuity. This study proposes to mitigate the seismic response of the stockers by introducing viscous fluid dampers between the ceiling and the tops of the stockers. The stocker is expected to vibrate less violently with a passive control force on top. A linear damper is considered in this application, with an optimal damping coefficient determined from a preliminary parametric study. The damper is small in size compared with those adopted in building or bridge applications. Component tests of the dampers have been carried out to make sure they meet the design requirements. Shake table tests have further been conducted to verify the proposed scheme under realistic earthquake conditions. Encouraging results have been achieved: the seismic responses are reduced by up to 60%, and the FOUPs are prevented from falling off the shelves, as would otherwise occur if left unprotected. The effectiveness of adopting a viscous fluid damper between the top of the stocker and the ceiling for seismic control has been confirmed. This technique has been adopted by Macronix International Co., Ltd. for the seismic retrofit of existing stockers. Demonstration projects applying the proposed technique are underway for other companies in the display industry as well.
Keywords: hi-tech industries, seismic protection, automated stocker system, viscous fluid damper
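For rough intuition on why a ceiling-mounted linear damper helps, the stocker can be idealized as a single-degree-of-freedom oscillator: a damper with coefficient c adds a supplemental damping ratio c/(2·m·ωn), and the steady-state amplification at resonance scales as 1/(2ζ). The mass, frequency and damper coefficient below are hypothetical placeholders, not the tested hardware:

```python
def added_damping_ratio(c, m, omega_n):
    """Supplemental damping ratio contributed by a linear viscous damper
    (F = c*v) acting on an SDOF idealization of the stocker."""
    return c / (2.0 * m * omega_n)

def resonant_amplification(zeta):
    """Steady-state dynamic amplification at resonance for an SDOF system."""
    return 1.0 / (2.0 * zeta)

# hypothetical stocker: 500 kg modal mass, 10 rad/s natural frequency,
# damper coefficient 2000 N*s/m, 5% inherent damping assumed
zeta_add = added_damping_ratio(2000.0, 500.0, 10.0)    # adds 0.20 damping ratio
reduction = 1.0 - (resonant_amplification(0.05 + zeta_add)
                   / resonant_amplification(0.05))
```

In this idealized case the resonant response drops by 80%, qualitatively consistent with (though larger than) the up-to-60% reductions measured on the shake table.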
Procedia PDF Downloads 357
11779 Evaluation of Quick Covering Machine for Grain Drying Pavement
Authors: Fatima S. Rodriguez, Victorino T. Taylan, Manolito C. Bulaong, Helen F. Gavino, Vitaliana U. Malamug
Abstract:
In sun drying, grain quality is greatly reduced when paddy grains are caught by the rain unsacked and unstored, resulting in reduced profit. The objectives of this study were to design and fabricate a quick covering machine for a grain drying pavement; to test and evaluate the operating characteristics of the machine in terms of deployment speed, recovery speed, deployment time, recovery time, power consumption, and aesthetics of the laminated sack; and to conduct a partial budget and cost curve analysis. The machine was able to cover the grains on a 12.8 m x 22.5 m grain drying pavement in an average time of 17.13 s. It consumed 0.53 W-h for the deployment and recovery of the cover. The machine entailed an investment cost of $1,344.40 and an annual cost charge of $647.32. Moreover, the savings per year using the quick covering machine amounted to $101.83.
Keywords: quick covering machine, grain drying pavement, laminated polypropylene, recovery time
Procedia PDF Downloads 323
11778 Finding DEA Targets Using Multi-Objective Programming
Authors: Farzad Sharifi, Raziyeh Shamsi
Abstract:
In this paper, we obtain the projection of inefficient units in data envelopment analysis (DEA) in the case of stochastic inputs and outputs using a multi-objective programming (MOP) structure. In some problems, the inputs may be stochastic while the outputs are deterministic, and vice versa. For such cases, we propose a multi-objective DEA-R model, because in some situations (e.g., when unnecessary and irrational weights in the BCC model reduce the efficiency score) an efficient DMU is classified as inefficient by the BCC model, whereas the same DMU is considered efficient by the DEA-R model. In other cases, only ratios of the stochastic data may be available (e.g., the ratio of stochastic inputs to stochastic outputs). We therefore provide a multi-objective DEA model without explicit outputs and prove that the input-oriented MOP DEA-R model under constant returns to scale can be replaced by the MOP-DEA model without explicit outputs under variable returns to scale, and vice versa. Using interactive methods to solve the proposed model yields a projection corresponding to the viewpoint of the DM and the analyst, which is nearer to reality and more practical. Finally, an application is provided.
Keywords: DEA, MOLP, stochastic, DEA-R
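For intuition, in the deterministic single-input/single-output special case the input-oriented, constant-returns DEA efficiency score reduces to each DMU's output/input ratio scaled by the best observed ratio, and the projection of an inefficient unit simply contracts its input by that score. This sketch deliberately omits the stochastic and multi-objective machinery of the proposed model:

```python
def ccr_efficiency(inputs, outputs):
    """Input-oriented CCR efficiency in the one-input/one-output case:
    score_j = (y_j / x_j) / max_k (y_k / x_k)."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

def project_inputs(inputs, scores):
    """Projection of inefficient DMUs onto the frontier: shrink each
    input by its efficiency score, holding outputs fixed."""
    return [x * s for x, s in zip(inputs, scores)]
```

For inputs [2, 4, 8] and outputs [4, 4, 8], the scores are [1.0, 0.5, 0.5]: DMU 1 sits on the frontier, while DMUs 2 and 3 are projected to input targets of 2 and 4, respectively.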
Procedia PDF Downloads 398
11777 Comparison between Hardy-Cross Method and Water Software to Solve a Pipe Networking Design Problem for a Small Town
Authors: Ahmed Emad Ahmed, Zeyad Ahmed Hussein, Mohamed Salama Afifi, Ahmed Mohammed Eid
Abstract:
Water has great importance in life. In order to deliver water from sources to users, many procedures must be undertaken by water engineers. One of the main procedures for delivering water to the community is the design of pressurized pipe networks. The main aim of this work is to calculate the water demand of a small town and then design a simple water network to distribute the water among the town with the smallest losses. The literature is reviewed to cover the main points related to water distribution. The methodology introduces two approaches to solve the research problem: the iterative Hardy-Cross method and the water software Pipe Flow. The results present two designs satisfying the same research requirements. Finally, the researchers conclude that the use of water software provides more capabilities and options for water engineers.
Keywords: looping pipe networks, Hardy-Cross method accuracy, relative error of Hardy-Cross method
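The Hardy-Cross method mentioned above iteratively corrects assumed loop flows until the head losses around each loop balance, using h = r·Q·|Q|^(n-1) and the classic loop correction ΔQ = -Σh / Σ n·r·|Q|^(n-1). A minimal single-loop sketch (the pipe data are invented for illustration, not taken from the town design):

```python
def hardy_cross_loop(r, Q, n=2.0, tol=1e-9, max_iter=100):
    """Balance one loop: r holds pipe resistance coefficients, Q the signed
    flows around the loop (positive in the direction of traversal)."""
    Q = list(Q)
    for _ in range(max_iter):
        h = [ri * qi * abs(qi) ** (n - 1) for ri, qi in zip(r, Q)]   # head losses
        dh = [n * ri * abs(qi) ** (n - 1) for ri, qi in zip(r, Q)]   # d|h|/dQ
        dQ = -sum(h) / sum(dh)                                       # loop correction
        Q = [qi + dQ for qi in Q]
        if abs(dQ) < tol:
            break
    return Q

# two parallel pipes carrying 100 L/s from node A to node B;
# initial guess splits the flow 60/40
Q = hardy_cross_loop(r=[2.0, 1.0], Q=[60.0, -40.0])
```

The flows converge to about 41.42 L/s and 58.58 L/s, the split at which both pipes produce equal head loss; note that adding ΔQ to every signed loop flow automatically preserves continuity at the nodes.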
Procedia PDF Downloads 165
11776 A Comparative Analysis of the Performance of COSMO and WRF Models in Quantitative Rainfall Prediction
Authors: Isaac Mugume, Charles Basalirwa, Daniel Waiswa, Mary Nsabagwa, Triphonia Jacob Ngailo, Joachim Reuder, Schättler Ulrich, Musa Semujju
Abstract:
Numerical weather prediction (NWP) models are considered powerful tools for guiding quantitative rainfall prediction. A number of NWP models exist and are used at many operational weather prediction centers. This study considers two of them: the Consortium for Small-scale Modeling (COSMO) model and the Weather Research and Forecasting (WRF) model. It compares the models' ability to predict rainfall over Uganda for the period 21 April 2013 to 10 May 2013 using the root mean square error (RMSE) and the mean error (ME). In comparing the performance of the models, the study assesses their ability to predict light rainfall events and extreme rainfall events. All the experiments used the default parameterization configurations and the same horizontal resolution (7 km). The results show that the COSMO model had a tendency to predict no rain in many cases, which explains its under-prediction. The COSMO model (RMSE: 14.16; ME: -5.91) presented a significantly (p = 0.014) higher magnitude of error than the WRF model (RMSE: 11.86; ME: -1.09). However, the COSMO model (RMSE: 3.85; ME: 1.39) performed significantly (p = 0.003) better than the WRF model (RMSE: 8.14; ME: 5.30) in simulating light rainfall events. Both models under-predicted extreme rainfall events, with the COSMO model (RMSE: 43.63; ME: -39.58) presenting significantly higher error magnitudes than the WRF model (RMSE: 35.14; ME: -26.95). This study recommends additional diagnosis of the models' treatment of deep convection over the tropics.
Keywords: comparative performance, the COSMO model, the WRF model, light rainfall events, extreme rainfall events
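The two verification scores used in the comparison are defined as follows; a negative ME indicates under-prediction, matching the signs reported above:

```python
import math

def rmse(forecast, observed):
    """Root mean square error of a forecast series against observations."""
    return math.sqrt(sum((f - o) ** 2 for f, o in zip(forecast, observed))
                     / len(observed))

def mean_error(forecast, observed):
    """Mean error (bias); negative when the model under-predicts rainfall."""
    return sum(f - o for f, o in zip(forecast, observed)) / len(observed)
```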
Procedia PDF Downloads 261
11775 Security in Resource Constraints: Network Energy Efficient Encryption
Authors: Mona Almansoori, Ahmed Mustafa, Ahmad Elshamy
Abstract:
Wireless nodes in a sensor network gather and process critical information that the network is designed to process and communicate. The information flooding through such a network is critical for decision making and data processing, and the integrity of that data is one of the most critical factors in wireless security; it must be preserved without compromising the processing and transmission capability of the network. This paper presents a mechanism to securely transmit data over a chain of sensor nodes without compromising the throughput of the network, utilizing the battery resources available at the sensor nodes.
Keywords: hybrid protocol, data integrity, lightweight encryption, neighbor-based key sharing, sensor node data processing, Z-MAC
Procedia PDF Downloads 145
11774 Optimal Dynamic Economic Load Dispatch Using Artificial Immune System
Authors: I. A. Farhat
Abstract:
The dynamic economic dispatch (DED) problem is one of the complex constrained optimization problems that have nonlinear, non-convex and non-smooth objective functions. The purpose of DED is to determine the optimal economic operation of the committed units while meeting the load demand. Associated with this constrained problem, there exist highly nonlinear and non-convex practical constraints to be satisfied. Therefore, classical and derivative-based methods are likely not to converge to an optimal or near-optimal solution of such a dynamic, large-scale problem. In this paper, an artificial immune system (AIS) technique is implemented and applied to solve the DED problem, considering the transmission power losses and the valve-point effects in addition to the other operational constraints. To demonstrate the effectiveness of the proposed technique, two case studies are considered. The results obtained using the AIS are compared to those obtained by other methods reported in the literature and found to be better.
Keywords: artificial immune system, dynamic economic dispatch, optimal economic operation, large-scale problem
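The non-smoothness mentioned above typically comes from the valve-point term in the fuel cost, F(P) = a + b·P + c·P² + |e·sin(f·(Pmin - P))|. A toy clonal-selection-style search on a hypothetical two-unit, single-period dispatch (losses and ramp limits omitted, coefficients invented; this is a sketch of the population idea, not the paper's AIS) illustrates why immune-inspired methods cope with this rippled cost surface:

```python
import math
import random

def fuel_cost(P, a, b, c, e, f, Pmin):
    # classic non-smooth cost: quadratic plus rectified sinusoid (valve-point effect)
    return a + b * P + c * P ** 2 + abs(e * math.sin(f * (Pmin - P)))

# hypothetical two-unit system, one period, demand D met exactly (losses ignored)
COEF = [(100.0, 2.0, 0.010, 50.0, 0.063, 50.0),   # unit 1: a, b, c, e, f, Pmin
        (120.0, 1.8, 0.012, 40.0, 0.048, 50.0)]   # unit 2
PMAX, D = 200.0, 300.0

def dispatch_cost(P1):
    P2 = D - P1                                   # power balance constraint
    if not (COEF[1][5] <= P2 <= PMAX):
        return float("inf")
    return fuel_cost(P1, *COEF[0]) + fuel_cost(P2, *COEF[1])

def ais_search(generations=200, pop=20, seed=0):
    """Clonal-selection sketch: clone the fitter half and hypermutate the
    clones with a step that shrinks over generations (elitist, so the best
    candidate never degrades)."""
    rng = random.Random(seed)
    lo = COEF[0][5]
    cands = [0.5 * (lo + PMAX)] + [rng.uniform(lo, PMAX) for _ in range(pop - 1)]
    for g in range(generations):
        cands.sort(key=dispatch_cost)
        step = 0.1 * (PMAX - lo) * (1 - g / generations)
        clones = [min(PMAX, max(lo, c + rng.gauss(0, step)))
                  for c in cands[: pop // 2] for _ in range(2)]
        cands = sorted(cands[: pop // 2] + clones, key=dispatch_cost)[:pop]
    return cands[0], dispatch_cost(cands[0])
```

Because the best antibody survives every generation, the search always ends at least as cheap as the feasible midpoint dispatch it is seeded with, while the hypermutated clones explore the sinusoidal ripples that defeat gradient methods.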
Procedia PDF Downloads 236
11773 Intracranial Hypertension without CVST in APLA Syndrome: A Unique Association
Authors: Camelia Porey, Binaya Kumar Jaiswal
Abstract:
BACKGROUND: Antiphospholipid antibody (APLA) syndrome is an autoimmune disorder predisposing to thrombotic complications, affecting the CNS either by arterial vaso-occlusion or venous thrombosis. Cerebral venous sinus thrombosis (CVST) secondarily causes raised intracranial pressure (ICP). However, intracranial hypertension without evidence of CVST is a rare entity. Here we present two cases of elevated ICP with no identifiable CVST. CASE SUMMARY: Case 1, a 28-year-old female, had a 2-month history of holocranial headache followed by bilateral painless vision loss progressing to loss of light perception over 20 days. The CSF opening pressure was elevated. Fundoscopy showed bilateral grade 4 papilledema. MRI revealed a partially empty sella with bilateral optic nerve tortuosity. Idiopathic intracranial hypertension (IIH) was diagnosed. With acetazolamide, there was complete resolution of the clinical and radiological abnormalities. Five months later, she presented with acute-onset right-sided hemiparesis. MRI was suggestive of an acute left MCA infarct. MR venography was normal. APLA testing came back positive, with high titres of anticardiolipin and beta-2 glycoprotein antibodies, both IgG and IgM. Case 2, a 23-year-old female, presented with headache and diplopia of 2 months' duration. CSF pressure was elevated, and grade 3 papilledema was seen. MRI showed bilateral optic nerve hyperintensities with nerve head protrusion and a normal MRV. The APLA profile showed elevated beta-2 glycoprotein IgG and IgA. CONCLUSION: This is an important non-thrombotic complication of APLA syndrome; it requires further large-scale study for insight into its pathogenesis, and early recognition to avoid future complications.
Keywords: APLA syndrome, idiopathic intracranial hypertension, MR venogram, papilledema
Procedia PDF Downloads 177
11772 Tax Administration Constraints: The Case of Small and Medium Size Enterprises in Addis Ababa, Ethiopia
Authors: Zeleke Ayalew Alemu
Abstract:
This study aims to investigate tax administration constraints in Addis Ababa, with a focus on small and medium-sized enterprises, by identifying issues and constraints in tax administration and assessment. The study identifies problems associated with taxpayers and tax-collecting authorities in the city. The research used qualitative and quantitative designs and employed questionnaires, focus group discussions and key informant interviews for primary data collection, and also used secondary data from different sources. The study identified many constraints that taxpayers are facing. Among others, the inefficiency of tax administration offices, reluctance to respond to taxpayers' questions, limited tax assessment and administration knowledge and skills, and corruption and unethical practices are the major ones. Moreover, the tax laws and regulations are complex and are not enforced equally and fully on all taxpayers, leading to a prevalence of business entities not paying taxes. This apparently results in an uneven playing field. Consequently, the tax system at present is neither fair nor transparent and increases compliance costs. In the case of a dispute, the appeal process is excessively long and the tax authority's decision is irreversible. The value added tax (VAT) administration and compliance system is not well designed, and VAT has created economic distortion between VAT-registered and non-registered taxpayers. The administration of cash register machines and the reporting system are major burdens for taxpayers. On the taxpayers' side, there is a lack of awareness of tax laws and documentation requirements. Based on the above and other findings, the study forwarded recommendations such as ensuring fairness and transparency in tax collection and administration, enhancing the efficiency of tax authorities through modern technologies and upgraded human resources, conducting extensive awareness-creation programs, and enforcing tax laws in a fair and equitable manner.
The objective of this study is to assess the problems, weaknesses and limitations of small and medium-sized enterprise taxpayers, tax authority administration, and the laws as sources of inefficiency and dissatisfaction, in order to forward recommendations that bring about efficient, fair and transparent tax administration. The entire study was conducted in a participatory and process-oriented manner involving all partners and stakeholders at all levels. Accordingly, the researcher used participatory assessment methods to generate both secondary and primary data, qualitative as well as quantitative, in the field. The research team held focus group discussions with 21 people from Addis Ababa City Administration tax offices and selected medium and small taxpayers. The study team also interviewed 10 key informants selected from the various segments of stakeholders. The lead researcher, along with research assistants, conducted the key informant interviews using a predesigned semi-structured questionnaire.
Keywords: taxation, tax system, tax administration, small and medium enterprises
Procedia PDF Downloads 73
11771 Forest Risk and Vulnerability Assessment: A Case Study from East Bokaro Coal Mining Area in India
Authors: Sujata Upgupta, Prasoon Kumar Singh
Abstract:
The expansion of large-scale coal mining into forest areas is a potential hazard for local biodiversity and wildlife. The objective of this study is to provide a picture of the threat that coal mining poses to the forests of the East Bokaro landscape. The vulnerable forest areas at risk have been assessed, and the priority areas for conservation are presented. The forested areas at risk in the current scenario were assessed and compared with past conditions using a classification- and buffer-based overlay approach. Forest vulnerability was assessed using an analytical framework based on systematic indicators and composite vulnerability index values. The results indicate that more than 4 km2 of forest was lost between 1973 and 2016. Large patches of forest have been diverted to coal mining projects. Forests in the northern part of the coalfield within a 1-3 km radius of the coal mines are at immediate risk. The originally contiguous forests have been converted into fragmented and degraded forest patches. Most of the collieries are located within or very close to the forests, thus threatening the biodiversity and hydrology of the surrounding regions. Based on the vulnerability values estimated, more than 90% of the forested grids in East Bokaro are highly vulnerable to mining. The forests in the sub-districts of Bermo and Chandrapura were identified as the most vulnerable to coal mining activities. This case study should add to the capacity of forest managers and mine managers to address the risk and vulnerability of forests at a small landscape level in order to achieve sustainable development.
Keywords: forest, coal mining, indicators, vulnerability
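A composite vulnerability index of the kind used here is commonly built by min-max normalizing each indicator over the forest grids and taking a weighted sum. The indicator names and values below are invented placeholders to show the mechanics; the paper's actual indicator set and weighting would replace them:

```python
def minmax(values):
    """Rescale one raw indicator to [0, 1] over all grids (assumes max > min)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def composite_index(indicators, weights=None):
    """indicators: {name: per-grid values, oriented so higher = more vulnerable}.
    Returns one composite score per grid, 0 (least) to 1 (most vulnerable)."""
    names = list(indicators)
    if weights is None:                          # equal weights by default
        weights = {k: 1.0 / len(names) for k in names}
    norm = {k: minmax(indicators[k]) for k in names}
    n_grids = len(indicators[names[0]])
    return [sum(weights[k] * norm[k][g] for k in names) for g in range(n_grids)]

# invented example: three forest grids, two indicators
scores = composite_index({
    "proximity_to_mine": [0.9, 0.4, 0.1],        # higher = closer = worse
    "forest_loss_rate":  [0.2, 0.8, 0.1],
})
```

In the example, grid 2 scores highest (high loss rate) and grid 3 scores zero (best on both indicators), which is how grids would be ranked into vulnerability classes.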
Procedia PDF Downloads 390
11770 Numerical Simulation of Two-Component Particle Flow in a Fluidized Bed
Authors: Wang Heng, Zhong Zhaoping, Guo Feihong, Wang Jia, Wang Xiaoyi
Abstract:
The flow of gas and particles in fluidized beds is complex and chaotic, which makes it difficult to measure and analyze experimentally. Some bed materials with poor fluidization performance are always fluidized together with a fluidizing medium. The material and the fluidizing medium differ in many properties, such as density, size and shape. These factors make the dynamic process more complex and experimental research more limited. Numerical simulation is an efficient way to describe the process of gas-solid flow in a fluidized bed. One of the most popular numerical simulation methods is CFD-DEM, i.e., computational fluid dynamics coupled with the discrete element method. The shapes of the particles are usually simplified as spheres in most studies. Although sphere-shaped particles keep the particle calculations uncomplicated, the effects of different shapes are disregarded. In practical applications, however, two-component systems in fluidized beds also contain both sphere-shaped and non-sphere-shaped particles. Therefore, the two-component flow of sphere and non-sphere particles needs to be studied. In this paper, the mixed flow of molded biomass particles and quartz in a fluidized bed was simulated. The integrated model was built on an Eulerian-Lagrangian approach, improved to suit non-sphere particles. The construction of the cylinder-shaped particles differed between the numerical methods. In the CFD part, each cylinder-shaped particle was constructed as an agglomerate of fictitious small particles, meaning that the small fictitious particles are gathered but not merged with each other. The diameter of a fictitious particle, d_fic, and its solid volume fraction inside a cylinder-shaped particle, α_fic (called the fictitious volume fraction), are introduced to modify the drag coefficient β through the volume fraction of the cylinder-shaped particles, α_cld, and that of the sphere-shaped particles, α_sph.
In a computational cell, the void fraction ε can be expressed as ε = 1 - α_cld·α_fic - α_sph. The Ergun equation and the Wen and Yu equation were used to calculate β. In the DEM part, cylinder-shaped particles were built by the multi-sphere method, in which small sphere elements merge with each other. A soft-sphere model was used to obtain the contact forces between particles; the total contact force on a cylinder-shaped particle was calculated as the sum of the forces on its small sphere elements. The model (size = 1 × 0.15 × 0.032 m3) contained 420,000 sphere-shaped particles (diameter = 0.8 mm, density = 1350 kg/m3) and 60 cylinder-shaped particles (diameter = 10 mm, length = 10 mm, density = 2650 kg/m3). Each cylinder-shaped particle was constructed from 2072 small sphere-shaped particles (d = 0.8 mm) in the CFD mesh and 768 sphere-shaped particles (d = 3 mm) in the DEM mesh. The lengths of the CFD and DEM cells are 1 mm and 2 mm, respectively. The superficial gas velocity was varied across the models as 1.0 m/s, 1.5 m/s and 2.0 m/s. The simulation results were compared with experimental results. The particles moved regularly, in a fountain-like pattern. The effect of the superficial gas velocity on the cylinder-shaped particles was stronger than on the sphere-shaped particles. The results prove that the present work provides an effective approach to simulating the flow of two-component particles.
Keywords: computational fluid dynamics, discrete element method, fluidized bed, multiphase flow
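The void-fraction bookkeeping and the drag calculation described above can be sketched as follows, using a Gidaspow-style combination (Ergun equation below ε = 0.8, Wen-Yu correlation above). The gas properties in the example are assumptions for air at ambient conditions, not values reported in the paper:

```python
def voidage(alpha_cld, alpha_fic, alpha_sph):
    """Cell void fraction: eps = 1 - alpha_cld*alpha_fic - alpha_sph."""
    return 1.0 - alpha_cld * alpha_fic - alpha_sph

def drag_beta(eps, rho_g, mu_g, d_p, slip):
    """Interphase drag coefficient beta; slip = |u_gas - u_solid| > 0.
    Ergun equation for dense cells, Wen-Yu correlation for dilute ones."""
    if eps < 0.8:                                       # Ergun branch (dense)
        return (150.0 * (1.0 - eps) ** 2 * mu_g / (eps * d_p ** 2)
                + 1.75 * (1.0 - eps) * rho_g * slip / d_p)
    re = rho_g * d_p * slip / mu_g                      # particle Reynolds number
    cd = (24.0 / (eps * re) * (1.0 + 0.15 * (eps * re) ** 0.687)
          if eps * re < 1000.0 else 0.44)               # Wen-Yu drag coefficient
    return 0.75 * cd * eps * (1.0 - eps) * rho_g * slip / d_p * eps ** -2.65

# example cell: cylinder agglomerates (alpha_cld = 0.1, alpha_fic = 0.6) plus
# spheres (alpha_sph = 0.3); air-like gas, 0.8 mm spheres, 1 m/s slip velocity
eps = voidage(0.1, 0.6, 0.3)                            # 0.64 -> Ergun branch
beta = drag_beta(eps, rho_g=1.2, mu_g=1.8e-5, d_p=8e-4, slip=1.0)
```

As expected physically, the drag coefficient is far larger in a dense cell than in a dilute one for the same slip velocity.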
Procedia PDF Downloads 326
11769 Zeolite 4A-confined Ni-Co Nanocluster: An Efficient and Durable Electrocatalyst for Alkaline Methanol Oxidation Reaction
Authors: Sarmistha Baruah, Akshai Kumar, Nageswara Rao Peela
Abstract:
The global energy crisis, arising from the dependence on fossil fuels and their limited reserves, together with environmental pollution, is a key concern for the research community. The implementation of alcohol-based fuel cells, such as those fueled by methanol, is anticipated as a reliable future energy technology owing to their high energy density, environmental friendliness, ease of storage, transportation, etc. To drive the anodic methanol oxidation reaction (MOR) in direct methanol fuel cells (DMFCs), an active and long-lasting catalyst is necessary for efficient energy conversion from methanol. Recently, transition metal-zeolite-based materials have been considered versatile catalysts for a variety of industrial and lab-scale processes. A large specific surface area, well-organized micropores, and adjustable acidity/basicity are characteristics of zeolites that make them excellent supports for immobilizing small-sized and highly dispersed metal species. Significant advances in the production and characterization of well-defined metal clusters encapsulated within zeolite matrices have substantially expanded the library of available materials and, consequently, their catalytic efficacy. In this context, we developed bimetallic Ni-Co catalysts encapsulated within LTA (also known as 4A) zeolite via a method combining in-situ encapsulation of the metal species during hydrothermal treatment with a subsequent chemical reduction process. The prepared catalyst was characterized using advanced techniques such as X-ray diffraction (XRD), field emission transmission electron microscopy (FETEM), field emission scanning electron microscopy (FESEM), energy dispersive X-ray spectroscopy (EDX), and X-ray photoelectron spectroscopy (XPS). The electrocatalytic activity of the catalyst for the MOR was evaluated in an alkaline medium at room temperature using techniques such as cyclic voltammetry (CV) and chronoamperometry (CA).
The resulting catalyst exhibited a catalytic activity of 12.1 mA cm-2 at 1.12 V vs Ag/AgCl for the electro-oxidation of methanol in alkaline media and retained remarkable stability (~77%) even after a 1000-cycle CV test, without any significant microstructural changes. The high surface area, the good integration of the Ni-Co species in the zeolite, and the ample surface hydroxyl groups provide highly dispersed active sites and quick analyte diffusion, which account for the notable MOR kinetics. This study thus opens up new possibilities for developing noble-metal-free zeolite-based electrocatalysts, given the simple synthesis steps, potential for large-scale fabrication, improved stability, and efficient activity for DMFC applications.
Keywords: alkaline media, bimetallic, encapsulation, methanol oxidation reaction, LTA zeolite
Procedia PDF Downloads 65
11768 Physicochemical Characterization of Waste from Vegetal Extracts Industry for Use as Briquettes
Authors: Maíra O. Palm, Cintia Marangoni, Ozair Souza, Noeli Sellin
Abstract:
Wastes from a vegetal extracts industry (cocoa, oak, guarana and mate) were characterized by particle size, proximate and ultimate analysis, lignocellulosic fractions, higher heating value (HHV), thermal analysis (thermogravimetric analysis, TGA, and differential thermal analysis, DTA) and energy density, to evaluate their potential as biomass in the form of briquettes for power generation. All wastes presented particle sizes adequate for briquette production. The wastes showed high moisture content, requiring prior drying for use as briquettes. Cocoa and oak wastes had the highest volatile matter contents, with maximum mass loss at 310 ºC and 450 ºC, respectively. The solvents used in the aroma extraction process influenced the moisture content of the wastes, which was highest for mate because water was used as the solvent. All wastes showed insignificant mass loss above 565 °C, hence resulting in low ash content. High carbon and hydrogen contents and low sulfur and nitrogen contents were observed, ensuring low generation of sulfur and nitrogen oxides. Mate and cocoa exhibited the highest carbon and lignin contents and the highest heating values. The dried wastes had high heating values, from 17.1 MJ/kg to 20.8 MJ/kg. The results indicate the energy potential of these wastes for use as fuel in power generation.
Keywords: agro-industrial waste, biomass, briquettes, combustion
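The link between the ultimate analysis and the higher heating value can be illustrated with Dulong's classical correlation. Note that the coefficients vary slightly between sources, the composition below is a generic lignocellulosic assumption rather than the paper's data, and the measured 17.1-20.8 MJ/kg values remain the authoritative ones:

```python
def hhv_dulong(c, h, o, s):
    """Dulong estimate of the higher heating value in MJ/kg; c, h, o, s are
    mass fractions of carbon, hydrogen, oxygen and sulfur from the
    ultimate analysis (dry basis)."""
    return 33.8 * c + 144.5 * (h - o / 8.0) + 9.4 * s

# generic biomass-like composition (assumed, not measured in this work)
hhv = hhv_dulong(c=0.50, h=0.06, o=0.42, s=0.001)   # ~18 MJ/kg
```

The estimate lands near the middle of the 17.1-20.8 MJ/kg range reported for the dried wastes, which is the typical level of agreement expected from this correlation for biomass.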
Procedia PDF Downloads 206
11767 Preliminary Results of Psychiatric Morbidity for Oncology Outpatients
Authors: Camille Plant, Katherine McGill, Pek Ang
Abstract:
Oncology patients face a host of unique challenges that are physical, psychological and philosophical in nature. This preliminary study aimed to explore the psychiatric morbidity of oncology patients in an outpatient setting at a major public hospital in Australia. In the study, 33 patients were referred to a psychiatrist by a clinical psychologist or their treating oncologist. These patients attended an outpatient psychiatry appointment at the Calvary Mater Hospital, Newcastle, over a 7-month period (June 2017 - January 2018). Of these, 45% went on to have a follow-up appointment. The Clinical Global Impressions (CGI) scale was used to gather symptom severity scores at baseline and at follow-up. The CGI is a clinician-rated instrument that provides an assessment of global functioning. It comprises two companion one-item measures: CGI-Severity (CGI-S), which rates the severity of mental illness, and CGI-Improvement (CGI-I), which rates the change in condition or improvement from the initiation of treatment. Patients referred to a psychiatrist were observed to be, on average, in the 'markedly ill', approaching 'severely ill', range (mean CGI-S of 5.5). However, patients who attended a follow-up appointment were, on average, only moderately ill at baseline (mean CGI-S of 3.9). Although these follow-up patients were not severely mentally ill initially, the contact was helpful, as their CGI-S scores improved on average to the mildly ill range (mean CGI-S of 2.8). A mixed ANOVA revealed a significant improvement in mental illness severity after the follow-up appointment (Greenhouse-Geisser p < .001). There was a near-even proportion of males and females attending appointments (58% female), and slightly more females attended a follow-up (60% female).
Males were, on average, more mentally ill at baseline than females (male mean 3.86, female mean 3.56), and males showed a greater average reduction in mental illness severity than females (male mean 2.71 at follow-up, female mean 3.00). This difference approached significance (p = .073) and would be important to explore with a larger sample size. The change in clinical condition for follow-up patients was also recorded. More than half of the patients (53%) showed minimal improvement after attending at least one follow-up appointment; there was no change for 27% of patients, and no patients were worse at follow-up. As this was a preliminary study with a small sample size, future research could explore whether there are significant gender differences, such as whether males experience a significantly greater reduction in symptoms of mental illness than females, as well as any effects of cancer stage or type on psychiatric outcomes. Future research could also investigate outcomes for patients who concurrently access a clinical psychologist alongside the psychiatrist. A limitation of the study is that the outcome measure is a brief rating completed by the clinician.
Keywords: Clinical Global Impressions scale, psychiatry, morbidity, oncology, outcomes
Procedia PDF Downloads 147
11766 The Real Consignee: An Exploratory Study of the True Party who is Entitled to Receive Cargo under Bill of Lading
Authors: Mojtaba Eshraghi Arani
Abstract:
According to the international conventions for the carriage of goods by sea, the consignee is the person who is entitled to take delivery of the cargo from the carrier. Such a person is usually named in the relevant box of the bill of lading (BL) unless the latter is issued 'To Order' or 'To Bearer'. However, there are cases in which the apparent consignee, as above, was not intended to take delivery of the cargo, such as the L/C issuing bank or the freight forwarder, which are named as consignee only for security or to accelerate the transit process. In such cases, as well as in the case of a BL issued 'To Order', the so-called 'real consignee' can be found in the 'Notify Party' box. The dispute revolves around the choice between the apparent consignee and the real consignee as the party entitled not only to take delivery of the cargo but also to sue the carrier for any damage or loss. While it is a generally accepted rule that only the apparent consignee shall be vested with such rights, some courts, like France's Cour de Cassation, have declared that the 'Notify Party', as the real consignee, was entitled to sue the carrier; in some cases, the same court went even further and permitted the real consignee to bring suit even where he was not mentioned on the BL as a 'Notify Party'. The main argument behind such reasoning is that the real consignee is the person who suffered the loss and thus has a legitimate interest in bringing the action; of course, the real consignee must prove that he incurred a loss. It is undeniable that this approach is contrary to the express definition of the consignee in the international conventions. However, international practice has permitted the use of the BL in different ways to meet the business requirements of banks, freight forwarders, etc. Thus, the issue is one of striking a balance between the international conventions on the one hand and existing practices on the other.
While the latest convention applicable to carriage by sea, i.e., the Rotterdam Rules, dealt with the comparable issue of the 'shipper' and the 'documentary shipper', it failed to cope with the matter discussed here. A new study is therefore required to propose the best solution for amending the current conventions for the carriage of goods by sea. A qualitative method based on the interpretation of the collected data has been used in this article; the data source is an analysis of domestic and international regulations and cases. It is argued in this manuscript that the judge is not allowed to recognize anyone as the real consignee other than the person mentioned in the 'Consignee' box, unless the BL is issued 'To Order' or 'To Bearer'. Moreover, the contract of carriage is independent of the contract of sale, and thus the consignee must be determined solely on the basis of the facts of the BL itself, such as the 'Notify Party' box, and not on any other contract or document.
Keywords: real consignee, cargo, delivery, to order, notify party
Procedia PDF Downloads 79